02 August 2016

React/Redux coming from CQRS/ES

Looking at React + Redux, there is a noticeable similarity to CQRS + ES. Having these front-end and back-end concepts aligned may be helpful for those who cross the boundary between front and back. However, there are some subtle differences that make the concepts "not quite" fit. Let's explore that.


Actions are Events

This couldn't be stated any more explicitly than in the Redux documentation itself.
Actions describe the fact that something happened
http://redux.js.org/docs/basics/Reducers.html

Action Creators are Commands... and Their Handlers

The command and the command handler are squashed into the same concept. The "command" doesn't travel outside the application, so there's less need to convert it to a distinct command message. In fact, doing so feels awkward and redundant due to the next point.

An event (aka action) is almost always generated by a handler. One of the main reasons commands can fail on the back-end is because they are not trusted, so the code inside the handler must validate the command (aka protect invariants). On the front end, the command is considered trusted because the components + state are protecting the invariants. For example, you won't be able to issue a command (the button will be disabled) if the invariant is violated. (If you can, it's considered a bug.)

Error handling also differs. On the UI, errors are typically treated as events (the user needs to be notified), whereas back-end command handlers typically just return errors as responses without affecting the domain.

What remains the same about the command handler between front- and back-ends is that the handler manages dependencies for the use case. On the front end that often takes the form of wrangling HTTP API calls.
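A sketch pulling these points together, in plain thunk-style Redux (the names `api.saveOrder`, `ORDER_SAVED`, etc. are hypothetical):

```javascript
// Plain action creators: these produce the "events".
function orderSaved(order) {
    return { type: 'ORDER_SAVED', order: order };
}
function orderSaveFailed(error) {
    // errors are events too: the UI needs to react to them
    return { type: 'ORDER_SAVE_FAILED', error: error };
}

// The "command + handler": it is trusted (components protect the
// invariants), it wrangles the dependency (the HTTP API call), and it
// emits events describing what happened.
function saveOrder(api, order) {
    return function (dispatch) {
        return api.saveOrder(order)
            .then(function (saved) { dispatch(orderSaved(saved)); })
            .catch(function (err) { dispatch(orderSaveFailed(err)); });
    };
}
```

With redux-thunk you would dispatch `saveOrder(api, order)` directly; without it, the same shape works as a free function that takes `dispatch`.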


Store is a View Database, Reducers are View Updaters

This is evident from the note on Reducers.
As your app grows, instead of adding stores, you split the root reducer into smaller reducers independently operating on the different parts of the state tree.
http://redux.js.org/docs/api/Store.html in A Note for Flux Users
Essentially, different properties off of the Store's state represent different views. A reducer is responsible for updating its own view.
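For example (with a hypothetical state shape), each reducer below owns one "view", and the root reducer plays the role a set of view updaters would play on the back end:

```javascript
// Each reducer maintains its own view off the root state.
function orderList(state, action) {
    state = state || [];
    switch (action.type) {
        case 'ORDER_SAVED': return state.concat([action.order.id]);
        default: return state;
    }
}
function orderDetail(state, action) {
    state = state || null;
    switch (action.type) {
        case 'ORDER_SAVED': return action.order;
        default: return state;
    }
}
// Hand-rolled equivalent of Redux's combineReducers: route the same
// event (action) to every view updater.
function rootReducer(state, action) {
    state = state || {};
    return {
        orderList: orderList(state.orderList, action),
        orderDetail: orderDetail(state.orderDetail, action)
    };
}
```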


But keep in mind...

I'm just getting started with React/Redux. These are mental models based on an understanding of CQRS/ES. "All models are wrong. Some of them are useful." (George E. P. Box) This doesn't map all the odds and ends from CQRS/ES and friends, but hopefully it's useful to you.

31 May 2016

Re Simplifying Message Handlers

In a previous post, Simplifying Message Handlers, I put forth a way to remove unnecessary interface declarations for message handlers (commonly seen in CQRS examples). I provided the research I used and left the implementation as an exercise to the reader. Today I'm going to revisit that topic. I'll explain my experiences with this process which led me to believe it was a bad idea. I'll also explain my current method of tying messages to code.

So wiring up with reflection and marker interfaces is very clever. (Note: clever is a developer curse word.) So clever, in fact, that it remained a mystery to my co-worker even after I explained it multiple times. She would always ask a perfectly reasonable question like: "Ok, but what causes this code to run?" Then I would show her the API call, which consults the handler collection to figure out who needs to get the message. That would lead to showing the bootstrap code, which called the reflection code to find all the handlers. By then her eyes had glazed over, and she would just label it "magic" and move on. The reflection code was just this side of inscrutable, relying on a small amount of arcane knowledge of .NET CLR internals. So it was hard to understand without actually debugging it or already knowing how reflection works.

The end result was that even after multiple explanations, she was afraid to touch anything in the project for fear of using the wrong incantation (marker interface and method declaration in this case). Needless to say, this presented some challenges. It got to the point where all items dealing with X were mine and Y were hers... exactly what you are supposed to combat on an Agile team. And that's not even mentioning the tooling and production issues. In particular, we discovered the hard way that when an IIS application pool wakes from idle sleep, it only loads assemblies which are directly referenced. This is different from its startup behavior, where it loads all deployed assemblies into the app domain. So we would deploy and everything would work fine; then when we all went home and India started work, the app would wake from sleep and crash with a TypeLoadException. As for tooling issues, I had to become blind to "this code is not referenced anywhere" warnings and get used to not being able to Go To Definition in certain cases.

So what do I do now? The most boring thing possible... I manually wire things up. It's crystal clear and leaves a reference trail that is easily discoverable by other developers and tooling. Once you settle on that, the remaining issue is optimizing for manual wiring. It is ideal to have only one place where you wire your implementations to the handling infrastructure.
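For illustration (in javascript rather than C#, and with hypothetical handler names), the "one place" can be as boring as a literal map from message type to handler:

```javascript
function handleOpenAccount(msg) { return 'opened ' + msg.accountId; }
function handleCloseAccount(msg) { return 'closed ' + msg.accountId; }

// The single wiring point: explicit, discoverable by Go To Definition,
// and visible to "unreferenced code" analysis.
var handlers = {
    OpenAccount: handleOpenAccount,
    CloseAccount: handleCloseAccount
};

function dispatchMessage(msg) {
    var handler = handlers[msg.type];
    if (!handler) throw new Error('No handler for ' + msg.type);
    return handler(msg);
}
```

Adding a handler means touching exactly one line of wiring, and a missing registration fails loudly instead of silently never running.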

03 February 2016

PC Building - Features I don't need

So I've nearly come to the end of planning my next PC build. (Let's be honest, the only reason I planned this thoroughly was because I was waiting for my tax refund.) And I've come to some conclusions about certain features and why I don't need them.

Thunderbolt

See my previous comments on the topic. It's mostly a mobile-oriented feature. USB 3.1 Gen2 is sufficient.

PEX Chips

These are also known as "PLX" (the name of the manufacturer) chips. My original thought here was to get an ASUS Z170-WS, which includes more electrical PCIe lanes (via a PEX chip). Then I could have more PCIe lanes for future expansion. However, after finding the electrical diagram in the manual, I realized that it was little more than a switch, with an uplink speed still limited by the CPU's PCIe lanes. So even though you can run "SLI x16/x16", all those connections are still sharing only 16 lanes back to the CPU. One advantage the PEX chip might have (if designed like a network switch) is increased GPU-to-GPU bandwidth. However, I haven't looked into the chip architecture, so I don't know that for sure. I'm also not planning to run SLI anyway.

At the end of the day, you're not actually gaining PCIe bandwidth. You're still doing the same old splitting of the CPU's PCIe lanes, but in a more flexible way and at the cost of dollars and negligible latency.

Multiple M.2 ports

At first, I was scrambling to find motherboards with dual M.2 ports so I could (one day) put two Samsung 950 Pros in a RAID0. However, I later realized that this would be bottlenecked by the DMI 3.0 interface. There is a great article on these drives here. Writes do nearly double in RAID0, but reads only improve by a factor of about 1.4 due to DMI limits. And those results were likely not exercising the other devices which share DMI bandwidth: USB, LAN, other SATA devices... pretty much everything that's not a GPU PCIe slot. Also, their real-world tests show little difference between RAID0 and single-drive performance. In other words, having them in a RAID0 vs a single drive makes little practical difference. Even if you had a workload that could notice a difference, M.2 bandwidth is still bottlenecked by DMI.
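As a rough sanity check on the bottleneck (the numbers are approximate published figures, so treat them as assumptions):

```javascript
// DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link.
var pcie3LaneGBps = 0.985;            // ~1 GB/s usable per PCIe 3.0 lane
var dmiGBps = 4 * pcie3LaneGBps;      // ~3.9 GB/s, shared by everything
var ssdReadGBps = 2.5;                // one 950 Pro's sequential read

var raid0ReadGBps = 2 * ssdReadGBps;  // 5.0 GB/s in theory...
var effectiveGBps = Math.min(raid0ReadGBps, dmiGBps); // ...capped by DMI

// best-case read speedup is ~1.6x, not 2x, before counting the USB,
// LAN, and SATA traffic that also rides on DMI
var speedup = effectiveGBps / ssdReadGBps;
```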

Now at this point, I realized that to break the barriers, I would need to plug into a PCIe slot that ran straight to the CPU. Even ignoring the fact that I would be stealing 8 lanes from the GPU (and that's assuming I have some GPU in the future which can use more than 8), motherboards have problems booting from those. (Yes, manufacturers can put an HBA chip on there to make it bootable, but that adds to an already-astronomical cost.) So basically, we're going to have to wait for chipset tech to catch up with storage tech and then buy new motherboards. From what I've seen about the upcoming chipsets, I don't believe it'll be this year. Although Optane is a bit of a wildcard. I can see PCIe SSDs being the most immediate step for it. NVRAM DIMMs could be a longer-term proposition, requiring OS support and/or motherboard mfg development to work the kinks out over the next few years.

SLI/CrossFire

Like many, I bought a graphics card intending to later buy its twin for SLI or CrossFire and extend the useful gaming life of my rig. I have been meaning to do this for several generations now, but it has never happened. The reason is that graphics cards are advancing rapidly enough that by the time I need to update my rig with better graphics, there is always a sufficient single-card upgrade. Considering that most graphics cards don't lose any performance going from x16 to x8 lanes, we still have a lot of room for single-card upgrades in the future. If the new graphics card can saturate my PCIe bus, then it's probably time for a computer upgrade anyway. (For example, my 6-year-old computer with PCIe 2.0 may be at saturation with current-gen GPUs... and it's time for an upgrade.)

Now even if I were keeping less of an eye on performance per dollar, multi-GPU has some inherent downsides. Here is a great review on SLI performance. Games may not support it. Even if the game will use it, the experience may be sub-par, with reports of graphical issues or even just no real difference in FPS. It's one of those "when it works, it's great" situations. So the value proposition becomes even worse for multi-GPU.

That's not to say that I will never go multi-GPU... I will just no longer buy a less-powerful GPU with the plan of later buying another to make up the performance difference.

Note that I don't do video content creation/rendering, where an SLI/CrossFire setup could be consistently beneficial.

SATA Express

It's a dead spec, but you still see it on motherboards (whether you want it or not) because it was still a thing when the last motherboard design cycle started. There are no drives which use it. One of these connectors can be looked at as just 2 SATA ports. Although the most ingenious use of this port I have seen so far was by ASRock, who used one as a header for front-panel USB 3.1 on its Extreme+ Z170 motherboards.

NIC Teaming

Some people are interested in dual LANs for NIC Teaming. But unless we're talking about a server, NIC Teaming serves no real purpose. And in fact it can cause more problems than using a single NIC, due to having to carefully configure it for your usage. You can't just check a box and it magically works. Obviously, it doesn't increase your internet bandwidth. It also doesn't increase connection speeds to individual computers. Those are still limited by the other computer's link speed. At best it only allows more computers to connect to you at a time using their max speed. Those hoping for a gaming advantage will be sorely disappointed.

Redundancy is also a false advantage for gamers. The chances of your NIC going out are low to start with. And when it happens you probably want to know about it so you can a) unplug it since it can spam your network with garbage packets and b) disable it in the BIOS. To be fault tolerant to network problems and not just NIC failures, you are also going to need a whole redundant infrastructure (each port plugged into a different switch, each switch connected to a different internet provider, etc). I don't know anyone who goes to that expense at their home.

Having 2 NICs is nice in general, but having your sole NIC die (while the rest of the motherboard manages to be fine) is hardly much of a problem. It's a pretty simple matter to grab a PCIe x1 NIC like this one and be back on your merry way.

That's all I can think of for now...

22 January 2016

Building a new PC -- should I care about Thunderbolt 3? No.

I am looking at building a new PC, and I came across this question myself. Recently, some motherboard manufacturers have announced support for Thunderbolt 3. And what's not to love about it? Very high speed and backward- (or is it sideways-?) compatible with USB 3.1.

The issue is that you can't get everything you might want on one motherboard. The Intel Z170 chipset, which is what you'll find paired with the TB3 Alpine Ridge controller, has a limited number of PCI Express lanes for IO. It's 26 to be precise, 6 of which are basically reserved for USB 3.0 and interconnects, so 20 are usable. These 20 lanes have to be divvied up between PCIe slots, storage (including SATA and PCIe-based storage like m.2 and u.2), networking, Thunderbolt controllers, USB 3.1 controllers, etc. (Graphics cards still use the separate 16 lanes provided by the CPU.)

The Alpine Ridge Thunderbolt 3 controller takes up 4 PCIe lanes, which would only be ~32Gbps of the advertised 40Gbps, but it also hijacks the Display Port interconnect from the CPU to make up the difference. This is why I doubt we will ever see full-speed Thunderbolt 3 PCIe x4 cards. To get full speed, it will have to go in an x8 slot and split bandwidth with the GPU, as chipset PCIe lanes are only in x4 groupings.

So to get TB3, you will have to give up something... like an m.2 slot, or 4 SATA ports, or that 3rd or 4th (CrossFire) GPU, or just an extra PCIe x4 slot. But the question is whether that trade is worth making. So I looked at the proposed uses for Thunderbolt 3 through the lens of an enthusiast PC user and detailed my observations below. Note that these use cases could go quite differently for mobile or small form factor users. In fact, the use as a docking station port for laptops is very compelling.

External Storage
This is perhaps the most likely enthusiast use of TB3. However, I see this only being used in specific cases where storage is a bottleneck (video content creation, for example). I don't see this being used by an average geek like me. My storage server is (would be) mainly for centralization purposes with large capacity disks. So, it doesn't make sense to build a new PC with TB3 for this purpose. I build the new PC for use as my workstation, then put the old one on storage duty. So my storage server isn't going to have TB3 anytime soon. Even if the old computer had it, I don't need to buy an external TB3 enclosure when my old computer has connections and room for drives internally.

Displays
This use case is not interesting for a desktop enthusiast/gaming PC. Monitors are plugged into dedicated graphics cards. These shove pixels out to displays themselves for maximum performance. I can't imagine a graphics card mfg who wants to route their output through a Thunderbolt controller (risking performance) when they could send it to the display directly. It could be that graphics cards will eventually have TB3 video ports, but that's not anything I need to have on my motherboard.

Device Charging
Only for mobile scenarios. I don't have a pressing need for this feature when wall sockets still exist and are in more rooms than my computer.

USB3.1 (Gen2) compatibility
This is nice, but type-C USB connectors are already present on most current-generation motherboards without TB3, at a smaller IO budget.

External graphics
This is really only useful for mobile scenarios. My desktop already has a place set for a video card or two.

Thunderbolt Networking
The "for-free" scenario mentioned in the press release was connecting 2 computers with a TB3 cable. That could be interesting for transferring data in limited scenarios (e.g. support), but it will require software to make it work, and TB3 becoming ubiquitous to make it even remotely likely. Using TB3 to hook into a larger network is far from "for-free". You'll likely buy a TB3-to-RJ45 converter in that case. Your other option would be a switch with a TB3 connector, which is made less likely by the fact that TB3 cables top out at about 3m (10 ft). Longer optical TB3 cables could be a thing in the future, but they will likely be much more expensive than an adapter + CAT6 cable.


In all, I don't imagine myself ever using Thunderbolt 3 from my desktop PC, so despite my initial inclination, I'm not going to aim for it when building a new PC. However, I will look for it on my wife's next laptop.

21 September 2015

Dates with REST services

No time, just a date. When I need a date (e.g. a birth date), I would like to avoid dealing with time zones entirely, because it doesn't matter what the time zone on a date is, as long as it displays as the same date everywhere. I don't want the person in CST to see a different date from the person in GMT. But the reality is that neither .NET nor Javascript has a plain Date object without a time, and therefore without a time zone. So I want to use UTC midnight at all levels so that there is no chance of conversions changing the way the date is saved or displayed.


Angular / Javascript


As near as I can tell, there is no way to make a Javascript Date object that is in UTC (unless the computer's local time is UTC). JS Dates are always in local time regardless of how they were created. So the first thing I need to do is give up on the Javascript Date object. It's worthless here. The only way to make sure you are transmitting UTC to the server is to keep the value as an ISO-format string.
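Stripped of the Angular plumbing in the directive below, the core of the approach is plain string handling that never touches Date:

```javascript
// ISO UTC-midnight string -> US-format display text
function isoToUs(iso) {
    // '1990-06-15T00:00:00Z' -> '6/15/1990'
    var m = /^(\d{4})-(\d{2})-(\d{2})T/.exec(iso);
    if (!m) return '';
    return (+m[2]) + '/' + (+m[3]) + '/' + (+m[1]);
}

// US-format input text -> ISO UTC-midnight string for the server
function usToIso(us) {
    // '6/15/1990' -> '1990-06-15T00:00:00Z'
    var m = /^(\d{1,2})\/(\d{1,2})\/(\d{4})$/.exec(us);
    if (!m) return null;
    function pad(n) { return (n.length < 2 ? '0' : '') + n; }
    return m[3] + '-' + pad(m[1]) + '-' + pad(m[2]) + 'T00:00:00Z';
}
```

Because no Date object is ever constructed, the browser's local time zone can never shift the value.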

So as a result, I made an angular directive to accept/validate a formatted date and save it to the model as an ISO-format UTC string. It also works with form validation. For now it only does US date format, but you are welcome to change it. If you do, I would put the format in an attribute on the directive (e.g. date-format="M/d/yyyy"). No guarantees that this code uses the most efficient means of doing things, but it works for me.

(function () {
    angular
        .module('myModule')
        .directive('dateFormat', dateFormat);
    dateFormat.$inject = [];
    // the '{0}...'.format(...) helper used below is not standard JS;
    // define it here if it isn't already provided elsewhere
    if (!String.prototype.format) {
        String.prototype.format = function () {
            var args = arguments;
            return this.replace(/\{(\d+)\}/g, function (m, i) { return args[i]; });
        };
    }
    var isoUtcDateFmt = /^([0-9]{4})-([0-9]{2})-([0-9]{2})T[0\:\.]*Z?$/;
    var inputDateFmt = /^(\d{1,2})[^0-9A-Za-z](\d{1,2})[^0-9A-Za-z](\d{4,})$/;
    var dayFn = {
        1: 31,
        // Gregorian rules
        // 29 days if divisible by 4 but not 100 unless divisible by 400
        2: function (y) { return y % 4 === 0 && (y % 100 !== 0 || y % 400 === 0) ? 29 : 28; },
        3: 31,
        4: 30,
        5: 31,
        6: 30,
        7: 31,
        8: 31,
        9: 30,
        10: 31,
        11: 30,
        12: 31
    };
    function loadUtc(iso) {
        if (iso === undefined || iso === null || !isoUtcDateFmt.test(iso))
            return '';
        var month, day, year;
        iso.replace(isoUtcDateFmt, function (match, y, m, d) {
            month = m * 1;
            day = d * 1;
            year = y * 1;
            return '';
        });
        return '{0}/{1}/{2}'.format(month, day, year);
    }
    function leftPad(char, len, value) {
        var s = ('' + value);
        while (s.length < len)
            s = char + s;
        return s;
    }
    function saveUtc(ymd) {
        if (ymd === null)
            return null;
        var y = leftPad('0', 4, ymd.year), m = leftPad('0', 2, ymd.month), d = leftPad('0', 2, ymd.day);
        return '{0}-{1}-{2}T00:00:00Z'.format(y, m, d);
    }
    function inputToYmd(input) {
        if (input === null || input === undefined || !inputDateFmt.test(input))
            return null;
        var month, day, year;
        input.replace(inputDateFmt, function (match, m, d, y) {
            month = m * 1;
            day = d * 1;
            year = y * 1;
            return '';
        });
        return { year: year, month: month, day: day };
    }
    function validate(ymd, maxAge, minAge) {
        if (ymd === null)
            return [null];
        var year = ymd.year, month = ymd.month, day = ymd.day;
        var errors = [];
        var maxDays = 31;
        // basic checks
        var monthValid = 1 <= month && month <= 12;
        var dayValid = false;
        // calculate max days in month
        if (monthValid) {
            var maxDaysFn = dayFn[month];
            maxDays = angular.isNumber(maxDaysFn)
                ? maxDaysFn
                : (angular.isFunction(maxDaysFn)
                    ? maxDaysFn(year)
                    : maxDays);
        }
        dayValid = 1 <= day && day <= maxDays;
        if (!monthValid)
            errors.push('Month must be 1 to 12');
        if (!dayValid)
            errors.push('Day must be 1 to {0}'.format(maxDays));
        // min/max range checking
        if (errors.length === 0) {
            var now = new Date();
            var d = new Date(now.getFullYear(), now.getMonth(), now.getDate());
            var todayTime = d.getTime();
            var todayYears = d.getFullYear();
            var minTime = d.setFullYear(todayYears - maxAge);
            var maxTime = d.setFullYear(todayYears - minAge);
            var testTime = new Date(year, month - 1, day).getTime();
            var dateValidMin = minTime <= testTime;
            var dateValidMax = testTime <= maxTime;
            if (!dateValidMin)
                errors.push('Max age is {0} years old'.format(maxAge));
            if (!dateValidMax)
                errors.push('Minimum age is {0} years old'.format(minAge));
        }
        return errors;
    }
    function dateFormat() {
        return {
            scope: { dateErrors: '=' },
            require: 'ngModel',
            link: function (scope, element, attrs, ctrl) {
                var maxAge = attrs.maxAge || 1000;
                var minAge = attrs.minAge || -1000;
                //View -> Model
                ctrl.$parsers.push(function (data) {
                    if (data !== undefined && data !== null && data !== '') {
                        var ymd = inputToYmd(data);
                        var errors = validate(ymd, maxAge, minAge);
                        // set validity if possible
                        if (attrs.name)
                            ctrl.$setValidity(attrs.name, errors.length === 0);
                        // set errors if possible
                        if (scope.dateErrors)
                            scope.dateErrors = errors;
                        // send ISO date string to model
                        return saveUtc(ymd);
                    }
                    else {
                        // set errors if possible
                        if (scope.dateErrors)
                            scope.dateErrors = [];
                        return null;
                    }
                });
                //Model -> View
                ctrl.$formatters.push(function (_) {
                    var data = ctrl.$modelValue;
                    if (data !== undefined && data !== null && data !== '') {
                        var inputText = loadUtc(ctrl.$modelValue); // load from ISO date string
                        var ymd = inputToYmd(inputText);
                        var errors = validate(ymd, maxAge, minAge);
                        // set validity if possible
                        if (attrs.name)
                            ctrl.$setValidity(attrs.name, errors.length === 0);
                        // set errors if possible
                        if (scope.dateErrors)
                            scope.dateErrors = errors;
                        // send input to view
                        return inputText;
                    }
                    else {
                        // set errors if possible
                        if (scope.dateErrors)
                            scope.dateErrors = [];
                        return '';
                    }
                });
            }
        };
    }
})();

The input looks something like this:

    <input name="dob" type="text"
        required date-format max-age="110" min-age="14" date-errors="ctrl.dobErrors"
        ng-model="ctrl.dob" />
    <div ng-repeat="obj in ctrl.dobErrors track by $id(obj)">{{obj}}</div>

.NET


So when I get to the server side, I am using DateTimeOffset. The problem is that when I go to reserialize this date to JSON, the standard serializer (Newtonsoft.Json) plays dumb, only serializing to local time. Even changing the serializer settings to DateTimeZoneHandling.Utc doesn't fix it. Considering it was deserialized from UTC, the right thing would be to reserialize to UTC. JSON.NET just doesn't do the right thing here. But there is an included converter that, with some nudging, will do the right thing. The link is here.


06 August 2015

Idea: Use HTML as Build Configuration

Over the past week or so, I've been trying to split out a front-end SPA application from the web services which host it. In the process, I've adopted some modern front-end tools like NPM, Bower, and Gulp. I've got it building debug and release versions (bundled and minified) of my AngularJS app.

Upon reflection on this experience, I feel that the build process using tools like Grunt and Gulp is a step backwards. I have nothing against those projects, but I feel that the process could be a lot simpler and not require learning extensive build APIs/plugins/configs. (I have nothing against learning new things... in fact I love to! But the end result has to be worth it.)

My idea is that the build configuration is the source code... specifically, it is the index.html page. If you take a hard look at index.html, you realize that it is already a configuration document for your app. It glues the parts of the application together (css, scripts, images). Here is an example index.html using angular that could be run without a build step.


<!DOCTYPE html>
<html ng-app>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

    <title>My App</title>

    <link rel="shortcut icon" href="assets/favicon.ico" />
    <!-- css -->
    <!-- framework -->
    <link href="../bower_components/font-awesome/css/font-awesome.min.css" rel="stylesheet" />
    <link href="../bower_components/bootswatch/lumen/bootstrap.min.css" rel="stylesheet" />
    <!-- app -->
    <link href="app.css" rel="stylesheet" />
</head>
<body>
    <!-- content -->
    <ui-view></ui-view>

    <!-- js -->
    <!-- framework -->
    <script src="../bower_components/angular/angular.min.js"></script>
    <script src="../bower_components/angular-ui-router/release/angular-ui-router.min.js"></script>
    <script src="../bower_components/angular-bootstrap/ui-bootstrap-tpls.min.js"></script>
    <!-- app -->
    <script src="app.module.js"></script>
    <script src="app.config.js"></script>
    <script src="app.routes.js"></script>
</body>
</html>


So my idea is that the build process reads this file and uses it as a configuration. For a developer build, the app could be copied as-is (since it is valid, run-able html as-is). For release, my imaginary web compiler would bundle and minify referenced items. CSS urls will also have to be resolved for things like external font assets. Then the contents would be included inline in the output index.html file. It's like compiling the application to a single executable in the desktop world. That result might look something like this (whitespace included for readability):


<!DOCTYPE html>
<html ng-app>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

    <title>My App</title>

    <link rel="shortcut icon" href="assets/favicon.ico" />
    <style>/* all css here */</style>
</head>
<body>
    <!-- content -->
    <ui-view></ui-view>

    <script>/* all js here */</script>
</body>
</html>




(xkcd comic)


One issue with "compiling" this is that with angular (and likely other frameworks), partial page templates are wired up in javascript code, not in HTML. But there is a convenient way to specify your templates in HTML using angular's $templateCache feature. Just include them as script tags.


    <!-- templates -->
    <script type="text/ng-template" src="module/template.html"></script>


This is a pretty powerful convention. During dev, this has the effect of preloading the template into $templateCache. Then when your module requests this URI, it is loaded from cache instead of issuing a new GET request. During build, this would tell my imaginary web compiler where to locate the template so its contents can be included inline.

The dream here is that the process of developing the web application also defines the build process for free. I believe this is eminently possible because the main html page already serves as a configuration for the application. Some conventions may be needed so that run-able HTML can serve as build configuration. One convention might be compiling on save for TypeScript, Coffee, LESS, SASS, etc. (since browsers can't process these uncompiled... yet). Another could be pre-loading templates.
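As a sketch of the first step such a tool would take (`extractRefs` is my hypothetical name for it), treating index.html as the build manifest means little more than collecting its references:

```javascript
// Crude regex scan of an index.html; a real tool would use an HTML
// parser, but this shows the idea: the page already lists the inputs.
function extractRefs(html) {
    var refs = { scripts: [], styles: [] };
    var scriptRe = /<script[^>]*\ssrc="([^"]+)"/g;
    var styleRe = /<link[^>]*\shref="([^"]+)"[^>]*\srel="stylesheet"/g;
    var m;
    while ((m = scriptRe.exec(html)) !== null) refs.scripts.push(m[1]);
    while ((m = styleRe.exec(html)) !== null) refs.styles.push(m[1]);
    return refs;
}
```

A release build would then bundle/minify refs.scripts and refs.styles and inline the results; a dev build would simply copy the page as-is.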

Sometimes these conventions would not be desirable. And I'm sure there are other issues that could be mentioned -- like the annoyance of having to manually add references to index.html. But in the end, I think this could get you 90% of the way towards a working build setup, and a combination of IDE tooling (to auto-add references, among other things) and other build tools can be applied to get the remaining 10%. As it stands now, we are using the complex build tools to solve 100% of a problem which is 90% easy.

05 August 2015

EU cookie law is stupid

This post was going to be about something else entirely. But as I went to create a new post, I received notification that Blogger was inserting a notification on my blog (a.k.a. annoying the 3 people that read my blog) for compliance with the EU cookie law. Needless to say, this made me angry. Let me give you four reasons why this is stupid.

Nobody reads the notifications

The notifications are intended to tell you what information of yours is kept in cookies. But history, UX research, and common sense have shown that users click through messages without reading them. They just want to read their content, and they don't care about the notice. Even if they tried to read the details, those are written by/for lawyers. So ultimately, the notifications are annoying and stupid. I can't wait until I find a plugin that blocks them. Currently, I just block them manually whenever I see one on a site.

The law doesn't protect you

The law purportedly gives users a choice. But are you really going to refuse Facebook's cookies and thus not be able to use their website? I didn't think so. You will take their cookies and like it, because you want to use their website. The law doesn't require sites to stop using cookies in undesirable ways. The site gets to have its way with your cookies, and if you don't like it, your option is to hit the road.

No law can protect you

A serial killer obviously knows murder is illegal, but that still doesn't stop them. The law only serves as an after-the-fact counterbalance to the problem. Likewise, illicit sites don't care about the law anyway. They aren't going to care about displaying a notice. (If they did, it would probably just be a trick to get you to click to install malware so they could outright steal your personal information.) These laws only burden legitimate site operators and site users (whose browsing experience is interrupted by an asinine notice). Even legit websites can decide one day that they want to abuse your information (use it legally but unethically). Popular example: a megacorp buys an established community website so it can trade the site's credibility for short-term profit.

You are the only one who can protect you

Ultimately, you are the only one with the power to protect yourself. Will you wield that power? There are a myriad of plugins which block intrusions into your privacy. Start with an ad blocker, because the abuse targeted (and missed completely) by this law -- abusing your information for profit -- is the foundation of current internet ads. After the ad blocker, check out Disconnect.me, Ghostery, and so forth. (Hint: google privacy browser plugins)

Government Protip: You know nothing about the internet. Stop making laws for it.
Citizen Protip: You know everything about the internet. Stop allowing stupid laws to be made for it.