I need my search to identify the utilization of devices and then categorize them into ten utilization buckets at day=0 (<10%; 10%-20%; ...; 90%-100%).
Within each utilization bucket there are multiple devices, identified by "deviceId" (for example, the 0%-10% utilization bucket might have 56 devices, while the 10%-20% bucket might have 120).
Each device has multiple switches (each with its own macAddress) that are "Up" or "Down"; "utilization" per device equals the number of "Up" switches divided by the total number of switches on that device.
Once I have the deviceIds for the devices in each utilization bucket at day=0, I will run another search to get their utilization on every subsequent day, so I can graph the increase in utilization for the devices in each bucket. There are, for example, >200 days to run.
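The per-device utilization and bucketing logic described above can be sketched outside Splunk. Here is a minimal Python sketch with made-up sample events; the field names (deviceId, macAddress, operStatus) come from the question, everything else is illustrative:

```python
from collections import defaultdict

# Hypothetical sample data standing in for the Splunk events:
# (deviceId, macAddress, operStatus) per switch, already deduped by macAddress.
events = [
    ("dev1", "aa:01", "Up"), ("dev1", "aa:02", "Down"),
    ("dev1", "aa:03", "Down"), ("dev1", "aa:04", "Down"),
    ("dev2", "bb:01", "Up"), ("dev2", "bb:02", "Up"),
]

def bucket_devices(events):
    """Compute utilization per device and assign each device to a 10% bucket."""
    up = defaultdict(int)
    total = defaultdict(int)
    for device_id, _mac, status in events:
        total[device_id] += 1
        if status == "Up":
            up[device_id] += 1
    buckets = defaultdict(list)
    for device_id in total:
        utilization = up[device_id] / total[device_id]
        # bucket 0 = 0%-10%, ..., bucket 9 = 90%-100%;
        # min() keeps a device at exactly 100% in the top bucket.
        bucket = min(int(utilization * 10), 9)
        buckets[bucket].append((device_id, utilization))
    return buckets

buckets = bucket_devices(events)
# dev1: 1/4 = 0.25 -> bucket 2; dev2: 2/2 = 1.0 -> bucket 9
```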
Here is my current search, which does not work (the subsearch by deviceType is necessary):
index="xyz" source="uvw" earliest=06/11/2016:00:00:00 latest=06/11/2016:23:59:59
[ search source="efg" index="xyz" deviceType="Big" timeStamp="2016-06-11*"
| dedup deviceId
| table deviceId]
| dedup macAddress
| eval downps = stats dc(macAddress) by deviceId,operStatus="Down"
| eval upps = stats dc(macAddress) by deviceId,operStatus="Up"
| eval totalps = downps+upps
| eval utilization=upps/totalps
| stats count by deviceId,util<"0.1"
| outputlookup zerotenbucket.csv
This gives me "Error in 'eval' command: The expression is malformed. Expected ')'."
However, the method itself may be wrong and could be improved?
I would then run something like this:
index="xyz" source="uvw"
| lookup "zerotenbucket.csv" deviceId
| bin _time span=1d
| stats dc(macAddress) by deviceId,_time,operStatus
| sort _time
Alternatively, it would also work if this output gave me only the utilization per device rather than | stats dc(macAddress) by deviceId,_time,operStatus; but I need to correct the .csv-generating query first.
I may also run into an export size limit if there are more than 10k rows?
Try this for day0
index="xyz" source="uvw" earliest=06/11/2016:00:00:00 latest=06/11/2016:23:59:59
[ search source="efg" index="xyz" deviceType="Big" timeStamp="2016-06-11*"
| dedup deviceId
| table deviceId]
| dedup macAddress
| eval device_mac = deviceId."#".macAddress
| chart count over device_mac by operStatus
| addtotals
| eval utilization=Up/Total
| where utilization<=0.1
| rex field=device_mac "(?<deviceId>[^\#]+)\#(?<macAddress>.*)"
| table deviceId macAddress utilization
| outputlookup zerotenbucket.csv
And this for subsequent days
index="64581-np" source=*interfaces* macAddress!=null macAddress!=000000000000 interfaceName!="MgmtEth*" interfaceName!="BVI*" interfaceName!="PTP*"
| lookup "zerotenbucket.csv" deviceId
| bin _time span=1d
| eval device_mac = _time."#".deviceId."#".macAddress
| chart count over device_mac by operStatus
| addtotals
| eval utilization=Up/Total
| rex field=device_mac "(?<time>[^\#]+)\#(?<deviceId>[^\#]+)\#(?<macAddress>.*)"
| table time deviceId macAddress utilization
| sort time
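The queries above rely on one trick worth calling out: chart takes only a single "over" field, so the key fields are joined into one string with "#" and split back out afterwards with rex. A minimal Python sketch of that round trip (the sample values are made up):

```python
import re

def make_key(*parts):
    # Mirrors: eval device_mac = deviceId."#".macAddress
    return "#".join(str(p) for p in parts)

def split_key(key):
    # Mirrors: rex field=device_mac "(?<deviceId>[^\#]+)\#(?<macAddress>.*)"
    m = re.match(r"(?P<deviceId>[^#]+)#(?P<macAddress>.*)", key)
    return m.group("deviceId"), m.group("macAddress")

key = make_key("dev1", "aa:bb:cc:dd:ee:ff")
device_id, mac = split_key(key)
# key == "dev1#aa:bb:cc:dd:ee:ff"; the split recovers both original fields
```

Note that this only works cleanly because "#" never appears inside a deviceId or macAddress; any other character with that property would do.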
I resolved the first part like this:
....
[ search source=*devices* index="64581-*" productFamily="*ASR 9000*" timeStamp="2016-04-22*"
| dedup deviceId
| table deviceId]
| dedup macAddress
| chart count over deviceId by operStatus
| eval utilization=Up/(Up+Down)
| where utilization<=0.1
| table deviceId utilization
| outputlookup zerotenbucket.csv
However, please can you take a look at my second part? I can't get it to work:
index="xyz" source="uvw"
| lookup zerotenbucket.csv deviceId
| bin _time span=1d
| dedup time,macAddress
| chart count over deviceId by operStatus
| eval utilization=Up/(Up+Down)
| table time deviceId, utilization
Try this
index="xyz" source="uvw"
| lookup zerotenbucket.csv deviceId
| bin _time span=1d as time
| dedup time, macAddress
| eval time_device=time."#".deviceId
| chart count over time_device by operStatus
| eval utilization=Up/(Up+Down)
| rex field=time_device "(?<time>[^\#]+)\#(?<deviceId>.*)"
| table time deviceId, utilization
Doesn't work: it gives me the utilization for every deviceId, not just the ones in zerotenbucket.csv...
Also, how would I convert time back to standard mm/dd/yy from epoch?
🙂 I forgot that part. Try this
index="xyz" source="uvw" [| inputlookup zerotenbucket.csv | table deviceId]
| bin _time span=1d as time
| dedup time, macAddress
| eval time_device=time."#".deviceId
| chart count over time_device by operStatus
| eval utilization=Up/(Up+Down)
| rex field=time_device "(?<time>[^\#]+)\#(?<deviceId>.*)"
| table time deviceId, utilization
This works; any suggestion on converting back to mm/dd/yy from epoch?
I've tried a few ways that don't work:
| fieldformat Time=strftime(time,"%T")
and
| eval Time=strftime(Time, "%m/%d/%Y %H:%M:%S")
which don't seem to work.
Got it:
| eval h_time=strftime(time,"%F %T")
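For reference, "%F %T" is the ISO date (%Y-%m-%d) plus the time (%H:%M:%S). The same conversion outside Splunk, with an arbitrary sample epoch value, looks like this in Python:

```python
from datetime import datetime, timezone

epoch = 1465603200  # 2016-06-11 00:00:00 UTC (arbitrary sample value)

# Equivalent of | eval h_time=strftime(time,"%F %T")
h_time = datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# For mm/dd/yy as originally asked, use %m/%d/%y instead
mdy = datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%m/%d/%y")
```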