#Business

CRACUNS – a drone that can operate underwater and in the air

CRACUNS drone

It isn’t unusual to encounter a hawk while flying your drone, but don’t be surprised to come across a kraken in the coming days. Yes, drones have now made the leap from air to water. Last week, researchers from the Applied Physics Laboratory (APL) at Johns Hopkins University disclosed a new drone concept named CRACUNS, which stands for Corrosion Resistant Aerial Covert Unmanned Nautical System.

The submersible quadcopter is designed to launch from an underwater station or an unmanned underwater vehicle (UUV), floating to the surface from depths of hundreds of feet. It can remain dormant when not in use, even in a murky environment. The focus was not just on helping the quadcopter survive, but on keeping it functionally efficient, so that it can still fly after being soaked underwater for long durations extending to hours, days, or even months.

One of the most exciting features of this amphibious copter is that it is made of 3D-printed components, which provide sufficient buoyancy and keep it light. The electronic components are sealed inside a waterproof compartment, and the exposed motors are protected with anti-corrosive coatings. The whole setup has been designed to be as affordable as possible, making CRACUNS a strong option for perilous operations.

The APL research team tested the quadcopter in a harsh littoral environment by leaving it submerged in salt water for nearly two months; when operated afterwards, it showed the same operational efficiency. So the era of underwater-launched drones has arrived, although the FAA may come up with regulations to protect the marine ecosystem. Setting it further apart, CRACUNS can be effectively put to use in covert missions.

[youtube https://www.youtube.com/watch?v=o17x3XTA-DM]

#Featured #GeoDev

How we run OpenStreetMap service in Azure

Architecture, trade-offs and cost — what, why and how much.

As a .NET developer, I tend to fall into the trap of approaching every software problem by thinking how I will do it with C#/ASP.NET/MSSQL. As I am trying to break this habit, the StopByStop project was my most valuable experience yet, one that forced me to think outside of the familiar toolbox.

When I first started prototyping the solution for showing highway exits and the places of interest around them on the route from A to B for all US destinations — what eventually became StopByStop — I predictably started searching StackOverflow for “C# maps routing” samples, which only took me so far. I was lucky to work with a really smart developer with OSM and GIS experience, who introduced me to PostgreSQL and PostGIS and wrote the initial version of the backend.

That was a great start, but I still had to improve scalability and host the service. After trying and rejecting one of the cheap shared PostGIS hosting providers, I decided that while the best long-term solution would be to manage our own servers, in the short term Azure, with its ability to resize instances, was the best way to go.

Linux VM with PostgreSQL

I experimented with various Linux VM sizes on Azure and eventually settled on DS4 for the VM and 1TB of storage for the database.

The CPU ended up being the bottleneck for query execution. Since PostgreSQL cannot use multiple cores to execute a single query, searching for exits and places of interest along a long route meant scanning multi-million-row indexes on several tables within a single process. My approach was to break long routes into shorter chunks, which resulted in some loss of accuracy but allowed me to launch up to 8 sub-queries simultaneously for every customer query.
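
As a rough illustration of that chunking approach, here is a minimal C# sketch using the Npgsql provider (mentioned later in this post); the pois table, its columns, the WKT chunk format, and the 500 m search radius are all illustrative assumptions, not StopByStop’s actual schema or queries.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Npgsql;

public record Poi(long Id, string Name);

public static class RouteQueries
{
    // Cap the number of simultaneous sub-queries per customer query at 8.
    private static readonly SemaphoreSlim Throttle = new(8);

    // Each route chunk is queried on its own connection, so each gets its
    // own PostgreSQL backend process and therefore its own CPU core.
    public static async Task<List<Poi>> FindPoisAlongRouteAsync(
        string connectionString, IReadOnlyList<string> chunkWkts)
    {
        var tasks = chunkWkts.Select(async wkt =>
        {
            await Throttle.WaitAsync();
            try
            {
                await using var conn = new NpgsqlConnection(connectionString);
                await conn.OpenAsync();
                // Hypothetical PostGIS proximity search within 500 m
                // of this chunk of the route.
                await using var cmd = new NpgsqlCommand(
                    "SELECT id, name FROM pois " +
                    "WHERE ST_DWithin(geom::geography, " +
                    "ST_GeomFromText(@chunk, 4326)::geography, 500)", conn);
                cmd.Parameters.AddWithValue("chunk", wkt);
                var pois = new List<Poi>();
                await using var reader = await cmd.ExecuteReaderAsync();
                while (await reader.ReadAsync())
                    pois.Add(new Poi(reader.GetInt64(0), reader.GetString(1)));
                return pois;
            }
            finally
            {
                Throttle.Release();
            }
        }).ToList();

        var results = await Task.WhenAll(tasks);
        return results.SelectMany(r => r).ToList();
    }
}
```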

The goal was to execute any uncached query in under 12 seconds on the DB side, which may not sound like an ambitious goal, but given that the intended scenario for our service is local destinations, we are hoping that in most cases customers will see their timeline appear much faster.

Azure VM resizing comes in handy for OSM data upload

So while DS4 is enough for regular operation, I found out that to upload OSM data into the database without spending several weeks on it, I needed a bigger machine: DS14.

 

This deserves a separate blog post, but in a nutshell, uploading OSM data to PostgreSQL revolves around running osm2pgrouting, and that step is extremely memory-hungry.

osm2pgrouting loads the whole OSM file (~160GB for North America) into memory, so if most of it ends up in swap space, the import will take weeks, even if the swap space is on an SSD. On DS14 with its 112GB of RAM I could complete osm2pgrouting in 3 days.
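
For reference, a typical invocation looks something like the sketch below, wrapped in C# to match the rest of this post’s stack; the paths, database name, and user are placeholders, and the flags are the commonly documented osm2pgrouting ones, so check your version’s help output before relying on them.

```csharp
using System.Diagnostics;

// Sketch: driving an osm2pgrouting import from C#. All paths, the database
// name, and the user are placeholders; verify the flags against your
// osm2pgrouting version.
class OsmImport
{
    static void Main()
    {
        var psi = new ProcessStartInfo("osm2pgrouting",
            "--file /data/north-america.osm " +
            "--conf /usr/share/osm2pgrouting/mapconfig.xml " +
            "--dbname osmdb --username postgres --clean")
        {
            UseShellExecute = false
        };
        // The import is memory-bound: keep an eye on swap usage while it runs.
        using var process = Process.Start(psi)!;
        process.WaitForExit();
    }
}
```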

Premium Storage

What about storage — any way to save there? At first I considered getting something like D3, which has a 200GB local SSD; that would be sufficient for the North America OSM DB and I wouldn’t need extra storage. However, the problem with a local SSD is that it is irrevocably lost on any hardware failure or when the VM is resized. So this was not an option.

When choosing the premium storage size I picked P30 (1TB), because anything below that (like P20) would have had less than a 200MB/s transfer rate. After trying it out and monitoring with the iostat tool, I found that the IO rate becomes the bottleneck at this point, and the price difference is not that significant.

Backup

I would like to be able to easily restore my database with OSM data, along with my custom indexes and tables, on a different machine, in case I decide to scale out to process more queries simultaneously; ideally I wouldn’t want to wait 3 days for osm2pgrouting to complete. This is what I found worked best for me: creating dumps with the pg_dump utility and backing them up to Google Drive with the odeke-em/drive client.
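
A minimal sketch of that backup flow, again in C#: it assumes pg_dump and the odeke-em/drive client are installed, that a Google Drive folder has been initialized at /backups, and that the database is named osmdb, all of which are placeholders.

```csharp
using System;
using System.Diagnostics;

// Sketch of the backup flow; paths and the database name are placeholders.
class Backup
{
    static void Run(string file, string args)
    {
        using var p = Process.Start(new ProcessStartInfo(file, args)
        {
            UseShellExecute = false
        })!;
        p.WaitForExit();
        if (p.ExitCode != 0)
            throw new Exception($"{file} {args} exited with {p.ExitCode}");
    }

    static void Main()
    {
        var dump = $"/backups/osm-{DateTime.UtcNow:yyyyMMdd}.dump";
        // Custom-format dump (-Fc) includes the custom tables and indexes
        // and can be restored on another machine with pg_restore, skipping
        // the 3-day osm2pgrouting run.
        Run("pg_dump", $"-Fc -f {dump} osmdb");
        // Push the dump folder up to Google Drive.
        Run("drive", "push /backups");
    }
}
```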

Frontend — using the right tool for the job?

So given that a Linux server was required for PostgreSQL, the cheapest approach for the frontend would have been to go with PHP or Node.JS.

The architecture that I DON’T have — I could have had my Linux server also host my web server with Node.JS or PHP

However, I have experience in neither of them, so to speed things up I went down the sub-optimal but familiar path: ASP.NET. That allowed me to get things done much faster, but the architecture ended up like this:

 
My dependency on ASP.NET and MS SQL has introduced 2 extra nodes into the architecture and is costing us an extra $80 per month

I’m using ASP.NET on an Azure Web App to host my web server, calling PostgreSQL via the Npgsql provider. Also, because I am addicted to LINQ to SQL (which doesn’t exist for PostgreSQL) and just plain lazy, I have another MSSQL DB for non-GIS persistent data. Overall, my dependency on the Microsoft stack is costing us an extra $80 per month. That is OK for now, but going forward we will definitely switch to hosting the web server on a Linux box, and ASP.NET Core on Linux may now be the best option for that migration.

For the Web App, I initially had our site running on B1, but memory was the main bottleneck, as I keep some of the datasets (like the list of all cities, the list of all exits, and routes for active sessions) in memory. The lesson I learned is that when memory pressure is primarily introduced by such pre-cached datasets, scaling up, not scaling out, is the right approach.
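
The pattern looks roughly like the sketch below; the loader method and dataset shape are hypothetical stand-ins for StopByStop’s actual types.

```csharp
using System;
using System.Collections.Generic;

// Sketch of a pre-cached dataset: loaded once per process and kept for the
// app's lifetime. The loader below is a hypothetical placeholder.
public static class StaticData
{
    public static readonly Lazy<IReadOnlyList<string>> AllCities =
        new(LoadAllCities, isThreadSafe: true);

    private static IReadOnlyList<string> LoadAllCities()
    {
        // In the real app this would be a one-time database query.
        return new List<string> { "Seattle, WA", "Portland, OR" };
    }
}
```

Because every web server instance holds its own full copy of such datasets, adding instances just multiplies the memory cost, while a single larger instance actually relieves it.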

CDN — a must for customer-facing sites

And finally, this one is a no-brainer and the cost is insubstantial. A CDN, along with other optimizations, helped to significantly reduce page load time.

 

WebPageTest.org is a great tool for client performance measurement and analysis.

And what’s this StopByStop you keep mentioning?

If you’ve read this far, you might be interested in what exactly we are trying to solve with OpenStreetMap and why — please read this and this.

This post was originally published on Medium and has been re-published on Geoawesomeness. You can follow Alex on Twitter @bulankou and on Medium.
