
OpenLayers3 get feature request for more than one point in the same place



I created a map that brings up a popup overlay on click; the JSFiddle is here. The only issue I have now is handling two points in the same location.

As you can see with the Birmingham point, clicking it only brings up the value of the point added last (the one on top). Ideally I would like it to show the values of all points at that location.


Somehow, you have to create an offset between the markers so the user is able to click on each one.

My suggestion is to listen to the pointermove event and show the markers separately. I was facing this same issue, and here is my solution.

UPDATE

So, based on the comments, my idea is: give each feature an array attribute holding the ids of the other features at the same location. On click, check this attribute and then proceed however you want.

var birmingham = new ol.Feature({
    geometry: new ol.geom.Point(ol.proj.fromLonLat([-1.900878, 52.483952])),
    name: 'Birmingham',
    address: 'Here Address',
    Number: '7',
    children: [2, 3]
});

Your fiddle forked.
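As a minimal sketch of the idea (plain objects stand in for ol.Feature instances here, and the groupByLocation helper is hypothetical, not part of the OpenLayers API): index features by coordinate up front, so a click handler can report every feature at that spot rather than only the topmost one.

```javascript
// Hypothetical helper: index features by coordinate so a click on one
// point can surface every feature at that location. Plain objects
// stand in for ol.Feature instances here.
function groupByLocation(features) {
  var groups = {};
  features.forEach(function (f) {
    var key = f.coordinates.join(',');
    (groups[key] = groups[key] || []).push(f.name);
  });
  return groups;
}

var features = [
  { name: 'Birmingham A', coordinates: [-1.900878, 52.483952] },
  { name: 'Birmingham B', coordinates: [-1.900878, 52.483952] },
  { name: 'London', coordinates: [-0.127758, 51.507351] }
];

var groups = groupByLocation(features);
// groups['-1.900878,52.483952'] now lists both Birmingham features.
```

In a real map you would build the key from feature.getGeometry().getCoordinates() and show the whole group in the popup.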


There are both advantages and disadvantages to having two firewalls. While firewalls themselves are not commonly exploited, they are prone to denial-of-service attacks.

In a topology with a single firewall serving both internal and external users (LAN and WAN), it acts as a shared resource for these two zones. Due to limited computing power, a denial of service attack on the firewall from WAN can disrupt services on the LAN.

In a topology with two firewalls, you protect internal services on the LAN from denial of service attacks on the perimeter firewall.

Of course, having two firewalls will also increase administrative complexity - you need to maintain two different firewall policies + backup and patching.

Some administrators prefer to filter only ingress traffic, which simplifies the firewall policy. The other practice is to maintain two separate rulesets with both outbound and inbound filtering. If you need an opening from LAN to WAN, you have to implement the rule on both firewalls. The rationale is that a single error will not expose the whole network, only the parts that firewall serves directly: the same mistake would have to be made twice.

The main disadvantage is cost and maintenance, but in my opinion the advantages outweigh these.

Firewalls aren't like barricades that an attacker has to "defeat" to proceed. You bypass a firewall by finding some path through that isn't blocked. It's not so much a matter of how many obstacles you put up but rather how many pathways through you allow. As a rule, anything you can do with two firewalls (in the same spot) you can do with one.

Now, if you're putting the firewalls in different places for different reasons, that's another story. We can't all collectively share a single firewall.

Are two firewalls better than one? There are two perspectives on that. From a hacker's point of view it doesn't matter, as they look for open ports to exploit. From a network administrator's point of view, a single firewall creates a single point of failure. Using multiple firewalls can provide redundancy: if the active firewall fails, traffic is switched to the backup firewall. Depending on the type of firewall deployed, state must be synchronized between the two firewalls before a link switchover. Furthermore, multiple firewalls can be deployed in:

  1. Active-standby mode (a backup firewall is kept in place)
  2. Load-balancing mode (both firewalls are active)
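The active-standby idea can be sketched in a few lines (an illustrative model only, not vendor firewall code; the healthy flag stands in for a real health check):

```javascript
// Illustrative active-standby failover: traffic is routed to the
// first healthy firewall; when the active one fails its health
// check, the standby takes over automatically.
function pickFirewall(firewalls) {
  var active = firewalls.find(function (fw) { return fw.healthy; });
  return active ? active.name : null; // null = total outage
}

var pair = [
  { name: 'fw-active', healthy: true },
  { name: 'fw-standby', healthy: true }
];

pickFirewall(pair);      // 'fw-active'
pair[0].healthy = false; // the active firewall fails...
pickFirewall(pair);      // ...'fw-standby' takes over
```

Real deployments also synchronize connection state between the pair so established sessions survive the switchover.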

There is potentially a scenario in which firewall software has a bug that causes it to allow traffic through which it shouldn't. In that case, there might be a benefit to having a second firewall behind the first running different software, on the assumption that the different software does not suffer from the same bug.

But how common are firewall bugs? My feeling is that they are not common enough to justify the additional complexity of this setup, which raises the risk of misconfiguration, and causes you to waste time that might be better spent securing the services behind the firewall.

Features and capabilities differ a lot between firewalls, so you cannot simply ask whether two firewalls are better than one. I guess that you have a limited budget, so it might be better to get a single, more capable firewall for the same money than two cheaper firewalls which, even combined, have fewer capabilities than the more expensive appliance.

But if you have enough money, it might be a good idea to combine capable devices from different vendors so that together they form a more secure system. Of course, it will not only cost more money to buy these firewalls, but also to maintain them. Each will have a different interface, slightly different capabilities, or at least different ways to achieve the same thing. So you have to find administrators who are fluent in all these models, and maybe find some way to combine the logs and alerts from the different firewalls, etc.

So yes, it will be more secure, but only if you have the money for capable devices and administrators. If you are on a limited budget, I would suggest getting the best single device and administrator you can for your money.

I agree with tylerl's answer above. However, reading the comments under that answer, there is contention where there shouldn't be, caused by confusion I hope to clear up. Then I will explain why I think two firewalls are not better than one for the question asked.

Point of contention: can a firewall be exploited? One group says no, while others say yes.

Here's the important question to ask: can you administer the firewall on the network?

For argument's sake, say I set up a Linux box as a firewall to manage traffic between the internet and my home network (wired + Wi-Fi). I also configure it so that the firewall machine is NOT reachable from the network; the only way to change the firewall rules is to physically log on to the machine.

With administration kept off the network like this, it is impossible to exploit my firewall from the internet or from my home network. To exploit it, you would first need to break into my house and access the machine physically.

As pointed out by several people, the firewall aspect is simply a barricade that decides what traffic gets let in - there's nothing exploitable about that part.

But if I reconfigured my setup so that the firewall machine is also connected to the same home network and accessible via SSH, then I would be taking a risk. Any attacker who gets into my home network (through whatever ports the firewall leaves open) could then use an SSH exploit to get into the firewall machine.

Let's say that our first firewall has some vulnerability and a malicious person is able to exploit it.

This implies that your first firewall box was connected to a network accessible to the attacker, and that there was some accessible service, like SSH, to exploit.

If there's a second firewall after it, he/she should be able to stop the attack, right?

It depends: is your second firewall box also connected to a network accessible to the attacker? If so, then sure, it could be exploited by a savvy attacker as well.

In reality I would not expect seasoned admins to make such rookie mistakes.



The important difference is that matrix parameters apply to a particular path element, while query parameters apply to the request as a whole. This comes into play when making a complex REST-style query to multiple levels of resources and sub-resources, for example (the names here are illustrative): /categories;name=foo/objects;name=green

It really comes down to namespacing.

Note: the "levels" of resources here are categories and objects.

If only query parameters were used for a multi-level URL, you would end up with something like /categories/objects?categoryName=foo&objectName=green (illustrative names again), where every parameter must be prefixed with the resource it belongs to.

This way you would also lose the clarity added by the locality of the parameters within the request. In addition, when using a framework like JAX-RS, all the query parameters would show up within each resource handler, leading to potential conflicts and confusion.

If your query has only one "level", then the difference is not really important and the two types of parameters are effectively interchangeable; however, query parameters are generally better supported and more widely recognized. In general, I would recommend that you stick with query parameters for things like HTML forms and simple, single-level HTTP APIs.
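To make the "per-segment" nature of matrix parameters concrete, here is a small sketch of parsing them out of one path segment (a hand-rolled parser for illustration; frameworks like JAX-RS do this for you):

```javascript
// Split one path segment like "objects;name=green;page=1" into its
// segment name and the matrix parameters attached to that segment.
function parseSegment(segment) {
  var parts = segment.split(';');
  var params = {};
  parts.slice(1).forEach(function (p) {
    var kv = p.split('=');
    params[kv[0]] = kv[1];
  });
  return { name: parts[0], params: params };
}

var seg = parseSegment('objects;name=green;page=1');
// seg.name === 'objects'
// seg.params => { name: 'green', page: '1' }
```

Because each segment carries its own parameters, two segments can both use a "name" parameter without colliding, which is exactly the namespacing benefit described above.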


POST is more secure than GET for a couple of reasons.

GET parameters are passed via the URL. This means they end up in server logs and browser history. GET also makes it very easy to alter the data being submitted to the server, as it sits right there in the address bar to play with.

The problem when comparing security between the two is that POST may deter the casual user, but will do nothing to stop someone with malicious intent. It is very easy to fake POST requests, and shouldn't be trusted outright.

The biggest security issue with GET is not malicious intent of the end-user, but a third party sending a link to the end-user. I cannot email you a link that will force a POST request, but I most certainly can send you a link with a malicious GET request, e.g. something like http://example.com/deleteaccount.php?confirm=yes (a hypothetical URL).

I just wanted to mention that you should probably use POST for most of your data. You would only want to use GET for parameters that should be shareable with others, e.g. /viewprofile.php?id=1234 or /googlemaps.php?lat=xxxxxxx&lon=xxxxxxx

POST just puts the information in a different place (the request message body) than GET (the URL). Some people feel the latter exposes more information, which is true on some points (read the edit below). For an attacker intercepting your traffic, a POST request is exactly as easy or hard to read as a GET request.
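The "different place" point can be shown in a few lines (URLSearchParams is a standard browser/Node API; the URL and field names are placeholders):

```javascript
// The same data, sent as GET vs POST. With GET the parameters become
// part of the URL (and so of server logs, proxies, and browser
// history); with POST they travel in the request body instead.
const params = new URLSearchParams({ user: 'alice', token: 'secret' });

// GET: parameters are visible in the URL itself.
const getUrl = 'https://example.com/login?' + params.toString();

// POST: the URL stays clean; the same bytes go in the message body.
const postUrl = 'https://example.com/login';
const postBody = params.toString(); // sent as the request body
```

Note that on the wire, without TLS, both forms are equally readable to anyone intercepting the traffic.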

If you want security so your requests aren't exposed between the start and end points, use SSL (https).

A valid point from Gumbo and Ladadada: logging of GET requests can happen more frequently than of POST requests, for instance in the history of a browser (if you share that browser with someone else).

So this means you shouldn't put sensitive data in a GET request, as it might be exposed to people who are screen-watching.

As @Gumbo says, URLs are logged and appear in more places, so GET requests are a little less secure than POST requests. The point is that a lot of people think POST requests are much more secure than GET requests because with GET they can see the data directly in the URL, but using, for example, proxy software that intercepts browser requests, anyone can view and alter POST data just as easily.

Another point is that you have to think about where you use GET and POST: GET should be used only for operations that do not alter database information, only request or read it, and POST should be used when data is going to be altered. Some web scanners automatically follow every link (usually GET requests) but not buttons or forms (usually POST requests), and so avoid altering the database; but if, for example, you put a delete operation behind a link, you run the risk that the link will be triggered by automated tools.

Obviously, web scanners can also "click" buttons and submit forms, but they usually differentiate, and this behaviour can be configured so that the site is spidered safely.


The Follow Questions and Answers feature is now live across the Network

We are happy to announce that the previously announced follow question and answer feature is now live across the Network, including Stack Overflow, all Stack Exchange sites, and all Meta sites. (International Stack Overflow sites will have it turned on in a day or two once we have translations all set.)

You can follow any question or answer (that you did not author) by clicking on the [follow] button that is shown in the menu immediately below the post (alongside the [share] button):

After you have followed a post, you will get inbox notifications for all new answers (in the case where you followed a question), comments, edits, and notices. You will not receive notifications for any action that you performed. As was mentioned in the earlier post, we are not making changes at this point to the notifications received by a post owner, or due to @mentions.

The initial release had been planned to only include question following. Thanks to many folks who chipped in over the past few weeks, we are happy to release both question and answer following at the same time. These are available in both the regular and mobile views.

We are still planning on two more related releases:

  1. Follows profile tab and question listing filter: A tab for follows will be added to your user profile activity page. Each user will be able to view this tab that will provide a listing of followed questions and answers, with standard sorts, and the ability to unfollow from the listing. Users (with the exception of moderators and authorized staff) will not be able to see this information about other users (nor will it be made public in the API, SEDE, or data dumps).
  2. We will be renaming the Favorites feature to Bookmarks. As we mentioned in the preview post, the feature will be the same as Favorites, with the name and icon updated to more accurately represent user expectations and usage. And you will be able to both "bookmark" and "follow" a question.

We hope that this new feature will allow you to have better access to the content that you care about and want to keep tabs on.

Update (June 11, 2020): The user profile tab and the rename of Favorites to Bookmarks have been released.


Many areas of RepairShopr allow for a digital signature, with multiple input devices supported, so you will always be able to remind a customer of what happened and when. Capture signatures easily via phone/tablet, touch screens, or USB signature-capture devices!

Invoicing system with all the power a repair business or retail store will need.

  • Scan line items from barcode labels
  • Digital signatures for touch-screen setups
  • Recurring Invoicing for business contracts
  • Email, SMS and Snail Mail Invoices to customers
  • Payment links in invoice emails
  • Scan serial numbers to invoices for warranty tracking
  • Customizable payment types
  • Cart and deposit system for smooth checkout




Anything else I should know about migration?

Comments on the question that link to the homepage of the target site will be deleted. This is to remove redundant comments that tell the author to post on that site.

The destination question will have the same tags as the question on the origin site, except for tags that don't exist on the destination site. (In other words, migration will never create new tags on the destination site.)

  • If the destination site is a meta site, the discussion tag will be added to ensure that all questions there have a required tag (unless it already has a required tag or a tag with the same name as a required tag).
  • If none of the migrated question's tags exist on the destination site, and the destination is not a meta site, then the untagged tag will be added to the question (so long as the migration wouldn't be blocked per the rules above - see What causes a migration to be blocked and what happens after?).
  • Moderator-only (red) tags aren't migrated over to the destination site.

Locking a question that has been migrated from another site and closed without rejecting the migration will cause the "migration rejected" notice to show up on the question, regardless of whether or not the question is actually a rejected migration. Any other lock reason notice will be ignored, and the timestamp on the "migration rejected" notice will be the time it was locked.

Moderators have the ability to clear the migration history of a question if needed. This delinks the question from the migration stub on the origin site, and makes it act like it was just posted on the destination site in the first place. This is often used to prevent the effects of a rejected migration if the original site's stub still exists, or to re-migrate a question that was previously migrated from another site.

If a question that is migrated is reopened on the origin site and subsequently re-migrated to the same site, it will be linked to the previously-created question on the destination site, rather than creating a new question there.

1 After 60 days, migrations can only be performed by Stack Exchange employees. These are performed only in very, very rare procedural cases and are usually not done on request.

2 Not every site has selectable migration paths. In particular, beta sites, recently-graduated sites, and Meta Stack Exchange don't have any selectable sites (other than the site's per-site meta, if applicable, and vice versa). On such sites, only moderators can migrate questions out (as they can choose any site to migrate to).

3 Moderators may migrate to any site in the Stack Exchange network, including all meta sites, and aren't limited to just the five listed sites.

4 As a moderator, if you want to migrate a question that has already been migrated from another site, you must first clear its original migration history. Only SE staff can migrate already-migrated questions without doing so. Note that if a staff member re-migrates a question already migrated from another site, it won't be marked "rejected".


Amazon S3 pricing

Pay only for what you use. There are no minimum fees. When storing and managing data in Amazon S3, there are six cost components to consider: storage pricing, request and data retrieval pricing, data transfer and transfer acceleration pricing, data management and analytics pricing, and the price of processing your data with S3 Object Lambda.

You pay for storing objects in your S3 buckets. The rate you are charged depends on your objects' size, how long you stored the objects during the month, and the storage class: S3 Standard, S3 Intelligent-Tiering, S3 Standard - Infrequent Access, S3 One Zone - Infrequent Access, S3 Glacier, S3 Glacier Deep Archive, and Reduced Redundancy Storage (RRS). For each object stored in the S3 Intelligent-Tiering class, you pay a monthly monitoring and automation fee, which covers tracking access patterns and moving objects between access tiers within S3 Intelligent-Tiering.

When you use PUT or COPY requests, or lifecycle policies, to move data into any S3 storage class, a per-request ingest charge applies. Consider the ingest or transition cost before moving objects into any storage class. Estimate your costs with the AWS Pricing Calculator.

Unless otherwise noted, the prices shown here do not include applicable taxes and duties, including VAT and applicable sales tax. For customers with a billing address in Japan, use of AWS is subject to Japanese Consumption Tax. See the Consumption Tax FAQ page for details.

Amazon S3 storage usage is calculated in binary gigabytes (GB), where 1 GB is 2^30 bytes. This unit of measurement, adopted by the International Electrotechnical Commission (IEC), is also known as a gibibyte (GiB). Similarly, 1 TB is 2^40 bytes, i.e. 1024 GB.
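The binary-GB convention can be expressed as a one-line conversion (a sketch; the helper name is ours, not an AWS API):

```javascript
// S3 usage is billed in binary gigabytes: 1 GB = 2^30 bytes (a
// gibibyte), and 1 TB = 2^40 bytes = 1024 GB.
const BYTES_PER_GB = Math.pow(2, 30);

function billableGB(bytes) {
  return bytes / BYTES_PER_GB;
}

billableGB(Math.pow(2, 30));     // 1 GB
billableGB(5 * Math.pow(2, 40)); // 5 TB = 5120 GB
```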

For Reduced Redundancy Storage pricing, see the S3 Reduced Redundancy Storage detail page.

For S3 pricing examples, see the S3 billing FAQ page or use the AWS Pricing Calculator.

* S3 Intelligent-Tiering has a minimum eligible object size of 128 KB for automatic tiering. Smaller objects may be stored, but they will always be charged at the Frequent Access tier rates. S3 Standard - IA and S3 One Zone - IA have a minimum billable object size of 128 KB. Smaller objects may be stored, but will be charged as 128 KB of storage in the corresponding storage class. S3 Intelligent-Tiering, S3 Standard - IA, and S3 One Zone - IA have a minimum billable storage duration of 30 days. Objects deleted before the 30-day minimum incur a prorated charge equal to the storage charge for the remaining days. Objects that are deleted, overwritten, or transitioned to a different storage class before the 30-day minimum incur the normal storage charge plus a prorated request charge for the remainder of the 30-day minimum. This includes objects deleted as a result of file operations performed by File Gateway. Objects stored for 30 days or longer incur no additional request charge. For each object archived to the Archive Access or Deep Archive Access tier in S3 Intelligent-Tiering, Amazon S3 uses 8 KB of storage for the object name and other metadata (charged at S3 Standard storage rates) and 32 KB of storage for the index and related metadata (charged at S3 Glacier and S3 Glacier Deep Archive storage rates).
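The prorated early-deletion charge described above can be sketched as follows (the per-GB-month rate below is a made-up placeholder, not an actual AWS price):

```javascript
// Prorated charge for an object removed before the 30-day minimum
// storage duration (S3 Intelligent-Tiering / Standard-IA / One
// Zone-IA): storage is billed for the days remaining in the minimum.
function earlyDeletionCharge(sizeGB, daysStored, ratePerGBMonth) {
  var minDays = 30;
  var remainingDays = Math.max(0, minDays - daysStored);
  return sizeGB * ratePerGBMonth * (remainingDays / minDays);
}

// e.g. 100 GB deleted after 10 days at a placeholder $0.0125/GB-month
// rate is billed for the 20 days left in the minimum duration.
earlyDeletionCharge(100, 10, 0.0125);
```

Objects kept past the minimum duration (daysStored >= 30) incur no such charge, which the Math.max guard reflects.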

** Amazon S3 Glacier and S3 Glacier Deep Archive require an additional 32 KB of data per object for S3 Glacier's index and metadata, charged at the appropriate storage-class rate. Amazon S3 requires 8 KB per object to store and maintain the user-defined name and metadata for objects archived to S3 Glacier and S3 Glacier Deep Archive. This enables you to get a real-time list of all of your S3 objects using the S3 LIST API or the S3 Inventory report. Objects archived to S3 Glacier and S3 Glacier Deep Archive have a minimum storage duration of 90 and 180 days, respectively. Objects deleted before the 90- or 180-day minimum incur a storage charge prorated for the days remaining. Objects that are deleted, overwritten, or transitioned to a different storage class before the minimum storage duration incur the normal storage charge plus a prorated request charge for the remainder of the minimum duration. Objects stored longer than the minimum storage duration incur no request charge. For each object stored in S3 Glacier or S3 Glacier Deep Archive, Amazon S3 adds 40 KB of chargeable overhead for metadata: 8 KB charged at S3 Standard rates and 32 KB charged at S3 Glacier or S3 Deep Archive rates. Customers using the direct S3 Glacier API can find API pricing on the S3 Glacier API pricing page.

You pay for requests made against your S3 buckets and objects. S3 request costs are based on the request type, as shown in the table below, and are charged on the total number of requests. When you use the Amazon S3 console to browse your storage, you incur charges for GET, LIST, and other requests made to facilitate browsing; these are charged at the same rates as the equivalent requests made via the API/SDK. The S3 developer guide provides technical details on the following request types: PUT, COPY, POST, LIST, GET, SELECT, lifecycle transition, and data retrieval. DELETE and CANCEL requests are free.

LIST requests for any storage class are charged at the same rate as S3 Standard PUT, COPY, and POST requests.

You pay for retrieving objects stored in the S3 Standard - Infrequent Access, S3 One Zone - Infrequent Access, S3 Glacier, and S3 Glacier Deep Archive storage classes. The S3 developer guide provides technical details on retrieval requests.


Configuration Enhancements

Aruba Central now supports configuration and monitoring of AOS-CX switches added to a UI group, using UI options and the MultiEdit mode. For more information on the supported versions, see Supported AOS-CX Platforms.

You can configure the 6200, 6300, 8320, 8325, and 8360 Switch Series using UI options, the MultiEdit mode, and templates; the 6405, 6410, and 8400 Switch Series can be configured only with templates.

Aruba Central offers several new menu items for AOS-CX switch configuration using UI groups. These menu items are described as follows:

    Properties: Administrators can now view or manage system properties such as contact, location, time zone, and VRFs.

