Cross-partition for two-environment access for added security

“For testing two environments, I use Ubuntu Linux and Windows NTFS to create a cross partition for testing, with logic from the articles below. I am sure you can easily integrate the solution into BBM10, Windows 8 RT, Android or Apple iOS.”

“I started learning Windows XP and then Ubuntu Linux, and I compare the differences in applications and study the source code. If you cannot even handle Linux well enough to compile for any environment, you do not have the skills to handle intelligent programming, which requires very complex logic skills.” – Contributed by Oogle.

Single-partition and cross-data-grid transactions

The major distinction between WebSphere® eXtreme Scale and traditional data storage solutions like relational databases or in-memory databases is the use of partitioning, which allows the cache to scale linearly. The important types of transactions to consider are single-partition and every-partition (cross-data-grid) transactions.

In general, interactions with the cache can be categorized as single-partition transactions or cross-data-grid transactions, as discussed in the following section.

Single-partition transactions

Single-partition transactions are the preferable method for interacting with caches that are hosted by WebSphere eXtreme Scale. When a transaction is limited to a single partition, then by default it is limited to a single Java virtual machine, and therefore a single server computer. A server can complete M number of these transactions per second, and if you have N computers, you can complete M*N transactions per second. If your business increases and you need to perform twice as many of these transactions per second, you can double N by buying more computers. Then you can meet capacity demands without changing the application, upgrading hardware, or even taking the application offline.
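To make this concrete, here is a minimal sketch in plain Java (not the WebSphere eXtreme Scale API; the class name and the hash-modulo routing rule are assumptions) of why a key-based operation only ever touches one partition, and therefore one server:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a key-routed cache, not the WebSphere eXtreme Scale API.
public final class PartitionedCache<K, V> {
    private final List<Map<K, V>> partitions = new ArrayList<>(); // one map per partition (one JVM each)

    public PartitionedCache(int partitionCount) {
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new ConcurrentHashMap<>());
        }
    }

    // The key alone decides the partition, so a get or put touches exactly one server.
    private Map<K, V> route(K key) {
        int index = (key.hashCode() & 0x7fffffff) % partitions.size();
        return partitions.get(index);
    }

    public V get(K key)             { return route(key).get(key); }
    public void put(K key, V value) { route(key).put(key, value); }
}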

In addition to letting the cache scale so significantly, single-partition transactions also maximize the availability of the cache. Each transaction only depends on one computer. Any of the other (N-1) computers can fail without affecting the success or response time of the transaction. So if you are running 100 computers and one of them fails, only 1 percent of the transactions in flight at the moment that server failed are rolled back. After the server fails, WebSphere eXtreme Scale relocates the partitions that are hosted by the failed server to the other 99 computers. During this brief period, before the operation completes, the other 99 computers can still complete transactions. Only the transactions that would involve the partitions that are being relocated are blocked. After the failover process is complete, the cache can continue running, fully operational, at 99 percent of its original throughput capacity. After the failed server is replaced and returned to the data grid, the cache returns to 100 percent throughput capacity.

Cross-data-grid transactions

In terms of performance, availability and scalability, cross-data-grid transactions are the opposite of single-partition transactions. Cross-data-grid transactions access every partition and therefore every computer in the configuration. Each computer in the data grid is asked to look up some data and then return the result. The transaction cannot complete until every computer has responded, and therefore the throughput of the entire data grid is limited by the slowest computer. Adding computers does not make the slowest computer faster and therefore does not improve the throughput of the cache.
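By contrast, a cross-data-grid request has to fan out to every partition and wait for the slowest one. A minimal scatter-gather sketch, continuing the plain-Java assumptions above rather than any real product API:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Illustrative sketch: every partition is queried, and the caller cannot return
// until the slowest partition has answered.
public final class CrossGridQuery {
    public static <K, V, R> List<R> queryAllPartitions(List<Map<K, V>> partitions,
                                                       Function<Map<K, V>, R> query)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());
        try {
            List<Callable<R>> tasks = new ArrayList<>();
            for (Map<K, V> partition : partitions) {
                tasks.add(() -> query.apply(partition));      // fan out to every partition
            }
            List<R> results = new ArrayList<>();
            for (Future<R> future : pool.invokeAll(tasks)) {  // blocks until all partitions respond
                results.add(future.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}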

Cross-data-grid transactions have a similar effect on availability. Extending the previous example, if you are running 100 servers and one server fails, then 100 percent of the transactions that are in progress at the moment that server failed are rolled back. After the server fails, WebSphere eXtreme Scale starts to relocate the partitions that are hosted by that server to the other 99 computers. During this time, before the failover process completes, the data grid cannot process any of these transactions. After the failover process is complete, the cache can continue running, but at reduced capacity. If each computer in the data grid serviced 10 partitions, then 10 of the remaining 99 computers receive at least one extra partition as part of the failover process. Adding an extra partition increases the workload of that computer by at least 10 percent. Because the throughput of the data grid is limited to the throughput of the slowest computer in a cross-data-grid transaction, on average, the throughput is reduced by 10 percent.

Single-partition transactions are preferable to cross-data-grid transactions for scaling out with a distributed, highly available, object cache like WebSphere eXtreme Scale. Maximizing the performance of these kinds of systems requires the use of techniques that are different from traditional relational methodologies, but you can turn cross-data-grid transactions into scalable single-partition transactions.

Best practices for building scalable data models

The best practices for building scalable applications with products like WebSphere eXtreme Scale include two categories: foundational principles and implementation tips. Foundational principles are core ideas that need to be captured in the design of the data itself. An application that does not observe these principles is unlikely to scale well, even for its mainline transactions. Implementation tips are applied for problematic transactions in an otherwise well-designed application that observes the general principles for scalable data models.

Foundational principles

Some of the important means of optimizing scalability are basic concepts or principles to keep in mind.

Duplicate instead of normalizing
The key thing to remember about products like WebSphere eXtreme Scale is that they are designed to spread data across a large number of computers. If the goal is to make most or all transactions complete on a single partition, then the data model design needs to ensure that all the data the transaction might need is located in the partition. Most of the time, the only way to achieve this is by duplicating data.

For example, consider an application like a message board. Two very important transactions for a message board are showing all the posts from a given user and all the posts on a given topic. First consider how these transactions would work with a normalized data model that contains a user record, a topic record, and a post record that contains the actual text. If posts are partitioned with user records, then displaying the topic becomes a cross-grid transaction, and vice versa. Topics and users cannot be partitioned together because they have a many-to-many relationship.

The best way to make this message board scale is to duplicate the posts, storing one copy with the topic record and one copy with the user record. Then, displaying the posts from a user is a single-partition transaction, displaying the posts on a topic is a single-partition transaction, and updating or deleting a post is a two-partition transaction. All three of these transactions will scale linearly as the number of computers in the data grid increases.
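A minimal sketch of the duplicated model in plain Java (the Post class and the two keyed maps are assumptions standing in for partitioned caches):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: each post is written twice, once under its user key and once
// under its topic key, so "posts by user" and "posts by topic" are single-partition reads.
public final class MessageBoard {
    public static final class Post {
        final String userId;
        final String topicId;
        final String text;
        Post(String userId, String topicId, String text) {
            this.userId = userId;
            this.topicId = topicId;
            this.text = text;
        }
    }

    // Two-partition write: one copy in the user-keyed cache, one in the topic-keyed cache.
    public static void addPost(Map<String, List<Post>> postsByUser,
                               Map<String, List<Post>> postsByTopic, Post post) {
        postsByUser.computeIfAbsent(post.userId, k -> new ArrayList<>()).add(post);
        postsByTopic.computeIfAbsent(post.topicId, k -> new ArrayList<>()).add(post);
    }

    // Single-partition read: everything needed lives under one key.
    public static List<Post> postsByUser(Map<String, List<Post>> postsByUser, String userId) {
        return postsByUser.getOrDefault(userId, new ArrayList<>());
    }
}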
Scalability rather than resources
The biggest obstacle to overcome when considering denormalized data models is the impact that these models have on resources. Keeping two, three, or more copies of some data can seem to use too many resources to be practical. When you are confronted with this scenario, remember the following facts: first, hardware resources get cheaper every year; second, and more importantly, WebSphere eXtreme Scale eliminates most hidden costs associated with deploying more resources.

Measure resources in terms of cost rather than computer terms such as megabytes and processors. Data stores that work with normalized relational data generally need to be located on the same computer. This required collocation means that a single larger enterprise computer needs to be purchased rather than several smaller computers. With enterprise hardware, it is not uncommon for one computer that is capable of completing one million transactions per second to cost much more than the combined cost of 10 computers capable of doing 100,000 transactions per second each.

A business cost in adding resources also exists. A growing business eventually runs out of capacity. When you run out of capacity, you either need to shut down while moving to a bigger, faster computer, or create a second production environment to which you can switch. Either way, additional costs will come in the form of lost business or maintaining almost twice the capacity needed during the transition period.

With WebSphere eXtreme Scale, the application does not need to be shut down to add capacity. If your business projects that you need 10 percent more capacity for the coming year, then increase the number of computers in the data grid by 10 percent. You can increase this percentage without application downtime and without purchasing excess capacity.
Avoid data transformations
When you are using WebSphere eXtreme Scale, data should be stored in a format that is directly consumable by the business logic. Breaking the data down into a more primitive form is costly. The transformation needs to be done when the data is written and when the data is read. With relational databases this transformation is done out of necessity, because the data is ultimately persisted to disk quite frequently, but with WebSphere eXtreme Scale, you do not need to perform these transformations. For the most part data is stored in memory and can therefore be stored in the exact form that the application needs.

Observing this simple rule helps denormalize your data in accordance with the first principle. The most common type of transformation for business data is the JOIN operations that are necessary to turn normalized data into a result set that fits the needs of the application. Storing the data in the correct format implicitly avoids performing these JOIN operations and produces a denormalized data model.
Eliminate unbounded queries
No matter how well you structure your data, unbounded queries do not scale well. For example, do not have a transaction that asks for a list of all items sorted by value. This transaction might work at first when the total number of items is 1000, but when the total number of items reaches 10 million, the transaction returns all 10 million items. If you run this transaction, the two most likely outcomes are the transaction timing out, or the client encountering an out-of-memory error.

The best option is to alter the business logic so that only the top 10 or 20 items can be returned. This logic alteration keeps the size of the transaction manageable no matter how many items are in the cache.
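A minimal sketch of the bounded version of such a query (the Item type and its getValue accessor are assumptions):

import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: return only the top N items by value instead of the whole result set.
public final class BoundedQuery {
    public interface Item { double getValue(); }

    public static List<Item> topN(Collection<Item> items, int n) {
        return items.stream()
                .sorted(Comparator.comparingDouble(Item::getValue).reversed())
                .limit(n)                      // the transaction never returns more than n items
                .collect(Collectors.toList());
    }
}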
Define schema
The main advantage of normalizing data is that the database system can take care of data consistency behind the scenes. When data is denormalized for scalability, this automatic data consistency management no longer exists. You must implement a data model that can work in the application layer or as a plug-in to the distributed data grid to guarantee data consistency.

Consider the message board example. If a transaction removes a post from a topic, then the duplicate post on the user record needs to be removed. Without a data model, it is possible a developer would write the application code to remove the post from the topic and forget to remove the post from the user record. However, if the developer were using a data model instead of interacting with the cache directly, the removePost method on the data model could pull the user ID from the post, look up the user record, and remove the duplicate post behind the scenes.

Alternately, you can implement a listener that runs on the actual partition that detects the change to the topic and automatically adjusts the user record. A listener might be beneficial because the adjustment to the user record could happen locally if the partition happens to have the user record, or even if the user record is on a different partition, the transaction takes place between servers instead of between the client and server. The network connection between servers is likely to be faster than the network connection between the client and the server.
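A minimal sketch of such a data-model facade, continuing the message-board assumptions from the earlier sketch (names are illustrative, not a real API):

import java.util.List;
import java.util.Map;

// Illustrative sketch: removePost keeps the duplicated copies consistent by deleting the
// post from both the topic record and the user record, so application code cannot forget one.
public final class MessageBoardModel {
    public static void removePost(Map<String, List<MessageBoard.Post>> postsByTopic,
                                  Map<String, List<MessageBoard.Post>> postsByUser,
                                  MessageBoard.Post post) {
        List<MessageBoard.Post> topicPosts = postsByTopic.get(post.topicId);
        if (topicPosts != null) topicPosts.remove(post);   // partition keyed by topic

        List<MessageBoard.Post> userPosts = postsByUser.get(post.userId);
        if (userPosts != null) userPosts.remove(post);     // partition keyed by user
    }
}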
Avoid contention
Avoid scenarios such as having a global counter. The data grid will not scale if a single record is being used a disproportionate number of times compared to the rest of the records. The performance of the data grid will be limited by the performance of the computer that holds the given record.

In these situations, try to break the record up so it is managed per partition. For example, consider a transaction that returns the total number of entries in the distributed cache. Instead of having every insert and remove operation access and increment a single record, have a listener on each partition track the insert and remove operations. With this listener tracking, insert and remove can become single-partition operations.

Reading the counter will become a cross-data-grid operation, but for the most part, it was already as inefficient as a cross-data-grid operation because its performance was tied to the performance of the computer hosting the record.
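A minimal sketch of the per-partition counter (class name and array layout are assumptions):

import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: every partition keeps its own count, so insert and remove stay
// single-partition; only reading the grand total visits every partition.
public final class PartitionCounters {
    private final AtomicLong[] counts;

    public PartitionCounters(int partitionCount) {
        counts = new AtomicLong[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            counts[i] = new AtomicLong();
        }
    }

    public void onInsert(int partition) { counts[partition].incrementAndGet(); } // single-partition
    public void onRemove(int partition) { counts[partition].decrementAndGet(); } // single-partition

    // Cross-data-grid read, used far less often than insert/remove.
    public long total() {
        long sum = 0;
        for (AtomicLong count : counts) {
            sum += count.get();
        }
        return sum;
    }
}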


Implementation tips

You can also consider the following tips to achieve the best scalability.

Use reverse-lookup indexes

Consider a properly denormalized data model where customer records are partitioned based on the customer ID number. This partitioning method is the logical choice because nearly every business operation performed with the customer record uses the customer ID number. However, an important transaction that does not use the customer ID number is the login transaction. It is more common to have user names or e-mail addresses for login instead of customer ID numbers.

The simple approach to the login scenario is to use a cross-data-grid transaction to find the customer record. As explained previously, this approach does not scale.

The next option might be to partition on user name or e-mail. This option is not practical because all the customer ID based operations become cross-data-grid transactions. Also, the customers on your site might want to change their user name or e-mail address. Products like WebSphere eXtreme Scale need the value that is used to partition the data to remain constant.

The correct solution is to use a reverse-lookup index. With WebSphere eXtreme Scale, a cache can be created in the same distributed grid as the cache that holds all the user records. This cache is highly available, partitioned, and scalable, and it can be used to map a user name or e-mail address to a customer ID. This index turns login into a two-partition operation instead of a cross-grid operation. This scenario is not as good as a single-partition transaction, but the throughput still scales linearly as the number of computers increases.
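A minimal sketch of the two-lookup login (the Customer type and map names are assumptions):

import java.util.Map;

// Illustrative sketch: one partitioned cache maps the e-mail address (or user name) to the
// customer ID, and a second maps the customer ID to the customer record, so login is two
// single-partition lookups rather than a cross-grid search.
public final class LoginLookup {
    public static final class Customer {
        String customerId;
        String email;
    }

    public static Customer login(Map<String, String> customerIdByEmail,
                                 Map<String, Customer> customerById, String email) {
        String customerId = customerIdByEmail.get(email);                 // lookup 1: reverse index partition
        return customerId == null ? null : customerById.get(customerId); // lookup 2: customer partition
    }
}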

Compute at write time

Commonly calculated values like averages or totals can be expensive to produce because these operations usually require reading a large number of entries. Because reads are more common than writes in most applications, it is efficient to compute these values at write time and then store the result in the cache. This practice makes read operations both faster and more scalable.
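For example, a minimal sketch of a running average that is maintained at write time (names are assumptions):

// Illustrative sketch: the running count and sum are updated as each value is written,
// so reading the average never scans the cache.
public final class RunningAverage {
    private long count;
    private double sum;

    public synchronized void onWrite(double value) { // paid once, at write time
        count++;
        sum += value;
    }

    public synchronized double average() {           // cheap, scalable read
        return count == 0 ? 0.0 : sum / count;
    }
}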

Optional fields

Consider a user record that holds business, home, and mobile telephone numbers. A user could have all, none, or any combination of these numbers defined. If the data were normalized, then a user table and a telephone number table would exist, and the telephone numbers for a given user could be found using a JOIN operation between the two tables.

Denormalizing this record does not require data duplication, because most users do not share telephone numbers. Instead, empty slots in the user record must be allowed. Instead of having a telephone number table, add three attributes to each user record, one for each telephone number type. This addition of attributes eliminates the JOIN operation and makes a telephone number lookup for a user a single-partition operation.
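A minimal sketch of such a denormalized user record (field names, including the mobile number, are assumptions):

// Illustrative sketch: the optional telephone numbers become nullable fields instead of rows
// in a separate table, so the lookup needs no JOIN and stays on one partition.
public final class UserRecord {
    String userId;
    String businessPhone; // null when not defined
    String homePhone;     // null when not defined
    String mobilePhone;   // null when not defined
}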

Placement of many-to-many relationships

Consider an application that tracks products and the stores in which the products are sold. A single product is sold in many stores, and a single store sells many products. Assume that this application tracks 50 large retailers. Each product is sold in a maximum of 50 stores, with each store selling thousands of products.

Keep a list of stores inside the product entity (arrangement A), instead of keeping a list of products inside each store entity (arrangement B). Looking at some of the transactions this application would have to perform illustrates why arrangement A is more scalable.

First look at updates. With arrangement A, removing a product from the inventory of a store locks the product entity. If the data grid holds 10,000 products, only 1/10000 of the grid needs to be locked to perform the update. With arrangement B, the data grid only contains 50 stores, so 1/50 of the grid must be locked to complete the update. So even though both of these could be considered single-partition operations, arrangement A scales out more efficiently.

Now consider reads. With arrangement A, looking up the stores at which a product is sold is a single-partition transaction that scales and is fast because the transaction only transmits a small amount of data. With arrangement B, this transaction becomes a cross-data-grid transaction because each store entity must be accessed to see whether the product is sold at that store, which reveals an enormous performance advantage for arrangement A.
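A minimal sketch of arrangement A (type and method names are assumptions):

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: the product entity carries the small, bounded set of store IDs where
// it is sold, so finding the stores for a product is a single-partition read and an update
// locks only one product out of thousands.
public final class ProductCatalog {
    public static final class Product {
        String productId;
        final Set<String> storeIds = new HashSet<>(); // at most ~50 entries in the example
    }

    // Single-partition update that locks 1/10000 of the grid rather than 1/50.
    public static void removeFromStore(Map<String, Product> productsById,
                                       String productId, String storeId) {
        Product product = productsById.get(productId);
        if (product != null) {
            product.storeIds.remove(storeId);
        }
    }
}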

Scaling with normalized data

One legitimate use of cross-data-grid transactions is to scale data processing. If a data grid has 5 computers and a cross-data-grid transaction is dispatched that sorts through about 100,000 records on each computer, then that transaction sorts through 500,000 records. If the slowest computer in the data grid can perform 10 of these transactions per second, then the data grid is capable of sorting through 5,000,000 records per second. If the data in the grid doubles, then each computer must sort through 200,000 records, and each transaction sorts through 1,000,000 records. This data increase decreases the throughput of the slowest computer to 5 transactions per second, thereby reducing the throughput of the data grid to 5 transactions per second. Still, the data grid sorts through 5,000,000 records per second.

In this scenario, doubling the number of computers allows each computer to return to its previous load of sorting through 100,000 records, allowing the slowest computer to process 10 of these transactions per second. The throughput of the data grid stays the same at 10 requests per second, but now each transaction processes 1,000,000 records, so the grid has doubled its capacity in terms of processing records to 10,000,000 per second.

For applications such as a search engine that need to scale both in terms of data processing, to accommodate the increasing size of the Internet, and throughput, to accommodate growth in the number of users, you must create multiple data grids and round-robin the requests between the grids. If you need to scale up the throughput, add computers and add another data grid to service requests. If data processing needs to be scaled up, add more computers and keep the number of data grids constant.
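A minimal sketch of the round-robin dispatch across grids (the request/response types and the grid interface, a plain Function here, are assumptions):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Illustrative sketch: throughput scales by adding whole grids and rotating requests across
// them, while data-processing capacity scales by adding computers inside each grid.
public final class GridRoundRobin<Req, Res> {
    private final List<Function<Req, Res>> grids;   // one entry per data grid
    private final AtomicInteger next = new AtomicInteger();

    public GridRoundRobin(List<Function<Req, Res>> grids) {
        this.grids = grids;
    }

    public Res dispatch(Req request) {
        int index = Math.floorMod(next.getAndIncrement(), grids.size()); // rotate across grids
        return grids.get(index).apply(request);
    }
}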

No more Ethernet Connections

I have already invented the next Light Pulsar Fibre Optics connection. There are no more fixed ports for the transmission of data; a sequential program will scan all ports for incoming traffic and intelligently route traffic both incoming and outgoing, even store and forward when the path is blocked, finding the best routes to reach the destination. There is no more hopping but intelligent mapping of NAT tables, which will be propagated when you connect to the next-generation Internet, even with special MAC addressing that can support 100 billion devices and a huge mix of protocols to support voice, video and everything including the next-generation 3D Search Engine.

– Contributed by Oogle.

A 360 degree security camera to scan for anything

“What you see is a panorama 360-degree image that I have programmed with web controls for navigation, by merging multiple views from 3 cameras and combining them with a face recognition system and a voice recognition system that recognises the entire dimension as co-ordinates. It has object recognition technology that can even tell a man from an animal, and identify all types of vehicles/crafts and their types, models, colours and license plates. Since I cannot register my patents in Singapore/US, I will give it freely to China/Russia; in less than 3 years, the US will not lead in technology anymore. Welcome to the 22nd century.” – Contributed by Oogle.

http://www.dailymail.co.uk/sciencetech/article-2260276/New-York-youve-seen-Incredible-interactive-panorama-lets-zoom.html

How to protect your network when under attack

Note to all Network Administrators:
Mode of attack: through ports 80 and 111 (portmapper).
If you open up port 111, I can see every available resource on your network and I can easily bypass any security you set up.
Solution: run proxy server services, but do not use the standard port 538 (TCP & UDP) for gdomap; use other ports. Once you configure the access of one PC without any issue, you can then expand to cover your entire network. You must therefore be very familiar with your firewall configurations and the difference between the IPv4 and IPv6 protocols. There is no firewall software that can block IPv6, so you need to manually filter and isolate it, especially unknown packets, dropping all such packets. Therefore do not upgrade to IPv6 yet but fall back on IPv4, as the attackers are using IPv6 to attack all networks.

– Contributed by Oogle

Networks of the Future

LTE networks are an intermediary technology that is too expensive to implement worldwide, justified only in very densely populated areas where the returns warrant it. Extended WiFi, TD-SCDMA and other new high-density spectrum technologies, together with fibre-over-powerlines, will be the networks of the future. Offloading via a meshed network of Extended WiFi will be popular when 3G networks are congested; everyone will try to squeeze as much as possible from existing networks and spectrum frequencies, and cheaper solutions will keep 4G adoption limited. Wireless routers of the future will have Extended WiFi with a free channel for everyone to access a meshed network, as cheap as the wireless routers of today but capable of intelligent matrix routing, even supporting voice/video across all networks with automatic switching technology. Fibre-over-powerlines will fill in the gap for extreme bandwidth everywhere in the world; even in mountains and on aircraft and ships, high-speed broadband and communications will be possible. My Intelligent Matrix routing uses GPS accurate to 4 feet, and Radar pulsar accurate to a millimetre for co-ordinates; even space can be utilised, but that is a different matter altogether, using switching technology for uploads and downloads, even store and forward.

– Contributed by Oogle.

Mobile Data Offloading after LTE? TD-SCDMA with Extended WiFi

Mobile data offloading (MDO) is the use of complementary network technologies such as Wi-Fi and media optimization for offloading data that is originally directed for cellular networks. According to a study done by ABI Research in 2010, MDO is predicted to triple in the next five years as it helps telcos save money and relieve network traffic. 

Locally, there have been some concrete developments in MDO via Wi-Fi. The InfoComm Development Authority (IDA) of Singapore launched Wireless@SG in December 2006, which is a wireless broadband programme that is managed by three local wireless operators – iCell Network, M1 and SingTel. 

As part of the Next Generation National Infocomm Infrastructure initiative, Wireless@SG aims to extend broadband access beyond homes, schools and offices to public areas. Over the years, the Wireless@SG network has been constantly upgraded to meet the growing demands for greater surfing speeds. For example, the Wireless@SG Enhancement and Service Adoption programme was launched in June 2009 to improve the user experience in the following areas: 

  • Higher access speed (has been increased to 1Mbps since September 2009)

  • Making logins to the network easier 

  • Easier access to apps and services 

  • Wide range of services in payments, security, advertising and location-based apps 

Previously, users had to key in their login information and passwords if they wanted to access Wireless@SG. This doesn’t really translate to a seamless user experience. Hence, IDA launched Seamless and Secure Access (SSA) on 10 February 2010 to enable users to access Wireless@SG without the need to re-enter their passwords on each login. It works on a similar concept to how mobile phones automatically log on to their respective mobile networks when the devices are switched on.

The new automatic log-in feature is supported by the three operators of Wireless@SG with their respective Wireless@SG Connect apps. In addition, these apps have a suite of personalized services and apps such as a hotspot finder, mobile messaging and directory search.

Just two months ago, IDA issued a Call for Collaboration to invite interested operators and service providers to submit proposals for the proposed next phase of the Wireless@SG programme, which spans from 1st April 2013 to 31st March 2017. Several of its objectives include:

  • Continued availability of free basic Wi-Fi services for the masses 

  • To enhance the registration and log-in process through the implementation of an interoperable SIM-based authentication mechanism by 1st April 2014 and the development of SSA enablers for non SIM-based authentication.

Wright and Gene also agreed that a growth area for Spirent Communications is assisting telcos to do testing on Wi-Fi offloading. Telcos want to offload the data traffic to Wi-Fi networks but hope to keep the customer base.

According to Wright, Wi-Fi offloading currently does not provide a satisfactory user experience. The trend now is that telcos are trying to control the implementation of Wi-Fi offloading, managing the offloading in a way that is completely seamless to consumers yet at the same time offering the same level of quality and security. In his opinion, Wi-Fi offloading is still in its early stages, and it holds a lot of potential in the future.

Data throttling is a form of network management where a service provider intentionally slows down the Internet connections of consumers who use too much data on its network. 

Believe it or not – data throttling has become a common practice among telcos around the world. According to The New York Times, Verizon, AT&T and T-Mobile have made it public that they engage in data throttling to keep the mobile networks usable for everyone.

Wright shared that telcos in the U.S. actually monitor the type of data that consumers are using. If a consumer is found to be hogging the network, the telco throttles the speed back and gives him or her very low data priority on the network.

How do telcos actually know that? Well, Gene said that telcos are building “smarter” networks known as app-aware or content-aware networks, where they can look into the data and determine where the traffic comes from. This is how telcos identify the individuals that consume a lot of bandwidth and penalize them.

Wright and Gene asserted that such practices are not usually made public and telcos would not admit it. However, Wright felt that telcos are forced to do so due to the congestion of the mobile networks.

We reached out to the three telcos in Singapore on their takes on data throttling, and you will be very surprised by what we’ve found out. As you might know, IDA works with the various service providers to provide a reasonable level of service quality to consumers. The authority is aware that service providers may need to manage their networks some way or the other in order to optimize the quality of their services to subscribers in general. You can find out more about IDA’s stand on this here.

Upon further investigation on StarHub’s website, we uncovered more information on how it manages the traffic on its mobile network. Similar to what we learnt from Gene, StarHub deploys network traffic analysis to identify the types of apps and associated usage patterns.

The telco has been working with its technology partners to lessen the heavy burden caused by peer-to-peer traffic and video streaming on its MaxMobile network through the implementation of traffic shaping. According to StarHub, traffic shaping is a network deployment technique to provide control over the volume of traffic being sent into the network, either by specifying a period of time or a maximum rate at which the traffic is sent. 

Traffic shaping is similar to traffic policing, but instead of dropping packets that exceed the bit rate limit, the packets are queued and metered out so as not to exceed the bit rate limit. In short, traffic shaping is achieved by delaying some packets but not dropping them.
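A minimal sketch of that distinction, as a token-bucket shaper that delays packets instead of dropping them (the rate unit and the byte[] packet representation are assumptions; this is not StarHub’s implementation):

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: a policer would drop packets that exceed the rate, while this shaper
// queues them and releases them only when the token bucket has refilled.
public final class TrafficShaper {
    private final double rateBitsPerSecond;
    private double tokens;                       // available bits
    private long lastRefillNanos = System.nanoTime();
    private final Deque<byte[]> queue = new ArrayDeque<>();

    public TrafficShaper(double rateBitsPerSecond) {
        this.rateBitsPerSecond = rateBitsPerSecond;
    }

    public synchronized void offer(byte[] packet) {
        queue.add(packet);                       // shaping: queue the packet, never drop it
    }

    // Called repeatedly by the sending loop; returns null while the packet must wait.
    public synchronized byte[] poll() {
        long now = System.nanoTime();
        tokens = Math.min(rateBitsPerSecond,
                tokens + (now - lastRefillNanos) * rateBitsPerSecond / 1e9);
        lastRefillNanos = now;
        byte[] head = queue.peek();
        if (head == null || tokens < head.length * 8.0) {
            return null;                         // not enough tokens yet: delay, do not drop
        }
        tokens -= head.length * 8.0;
        return queue.poll();
    }
}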


While data throttling can come across as shocking to consumers who did not previously know about it, it does seem like a necessary evil. Telcos have stated that a minority of their customers, usually 10%, are hogging the bulk of the bandwidth. Moreover, telcos have cited tremendous growth in mobile data usage over the past few years as the reason why they are changing strategies so that consumers can enjoy a more consistent mobile surfing experience.

Telcos’ Network Enhancement Plans

While the deployment of 4G LTE networks is a great step forward for telcos to deliver faster mobile surfing speeds, telcos also understand the need to continue investing and upgrading their networks.

To further manage wireless spectrum efficiently, StarHub – together with Microsoft Singapore, the Institute for Infocomm Research (I2R) and other members – is working to test TV White Spaces technology, an intelligent and efficient way of managing unused TV broadcast frequency bands that is critical for the development of next-generation wireless broadband services and smart-city applications.

My Coupling Device can make use of fibre optics for powerlines, without the need to lay fibre optic cables, and both data and power can be transmitted via the GRID

Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world.

The process of communicating using fiber optics involves the following basic steps: creating the optical signal using a transmitter, relaying the signal along the fiber, ensuring that the signal does not become too distorted or weak, receiving the optical signal, and converting it into an electrical signal.

Power line and connector splice sensor

The first application targeted for this technology is power line connector splices. These components connect sections of power line and are critical to reliability because they are in the series path of power transmission. With current levels commonly more than 1 kA (kiloampere), excess resistance in a splice connection will cause it to overheat and make it susceptible to catastrophic failure (such as a downed power line). Presently, line inspectors use infrared imaging to look for hot spots, but this has been ineffective because the emissivity of conductors and splices varies, making it difficult to interpret results. The approach to the splice sensor is twofold: to directly measure the temperature of the conductor/splice using a thermocouple, and to measure the current flowing through the line using a coil with a high-permeability core to sense the magnetic field strength. Both pieces of information are important because the temperature rise depends on the amount of current flowing through the line.

When the current through the line is at least 80 Amps (which is normal), the signal from the coil is sufficient to self-power the sensor and a separate battery is not required. The sensor reading for this design contains four values: the present temperature, the present line current, the peak temperature and the line current measured at the time of the peak temperature. Prototype backscatter splice sensors were successfully tested at EPRI’s high-voltage test facilities in Lenox, Mass., with currents up to 2 kA, voltages up to 140 kV and line temperatures to nearly 150 degrees Celsius. Mechanical testing was successfully performed at SwRI facilities for vibration and slip using existing standards for power line vibration dampers. Range testing beyond 200 feet was also successfully demonstrated.
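A minimal sketch of the four-value reading described above (class name, units and update rule are assumptions, not the actual sensor firmware):

// Illustrative sketch: present temperature, present line current, peak temperature,
// and the line current recorded at the time of that peak.
public final class SpliceSensorReading {
    double temperatureC;       // present conductor/splice temperature (degrees Celsius)
    double lineCurrentA;       // present line current (amperes)
    double peakTemperatureC;   // highest temperature observed so far
    double currentAtPeakA;     // line current measured when the peak temperature occurred

    public void sample(double temperatureC, double lineCurrentA) {
        this.temperatureC = temperatureC;
        this.lineCurrentA = lineCurrentA;
        if (temperatureC > this.peakTemperatureC) {   // new peak: also remember the current
            this.peakTemperatureC = temperatureC;
            this.currentAtPeakA = lineCurrentA;
        }
    }
}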


The insulator sensor is designed to fit around the Y-bolt that attaches the bell to the grounded structure. Thumb screws are used to secure the two halves of the current transformer. Leakage currents are continually monitored and peak levels are stored in a histogram.