Activity Feed
- Posted Re: Splunk App for *nix - all dashboards show "no results found", but definitely ingesting from UF on All Apps and Add-ons. 02-20-2022 11:27 AM
- Karma Re: Splunk App for *nix - all dashboards show "no results found", but definitely ingesting from UF for furl. 02-20-2022 11:23 AM
- Posted Re: Create Pie Chart from JSON data on Splunk Search. 05-15-2021 10:04 PM
- Posted Re: Create Pie Chart from JSON data on Splunk Search. 05-15-2021 05:12 PM
- Karma Re: Create Pie Chart from JSON data for ITWhisperer. 05-15-2021 03:03 PM
- Posted Re: Create Pie Chart from JSON data on Splunk Search. 05-15-2021 02:12 PM
- Posted Create Pie Chart from JSON data on Splunk Search. 05-15-2021 03:42 AM
- Posted Re: Splunk App for *nix - all dashboards show "no results found", but definitely ingesting from UF on All Apps and Add-ons. 09-06-2020 03:04 AM
- Got Karma for Using Splunk Stream for Netflow- now, ingesting but how to graph?. 06-05-2020 12:49 AM
- Got Karma for Using Splunk Stream for Netflow- now, ingesting but how to graph?. 06-05-2020 12:49 AM
- Got Karma for Using Splunk Stream for Netflow- now, ingesting but how to graph?. 06-05-2020 12:49 AM
- Got Karma for Using Splunk Stream for Netflow- now, ingesting but how to graph?. 06-05-2020 12:49 AM
- Got Karma for Using Splunk Stream for Netflow- now, ingesting but how to graph?. 06-05-2020 12:49 AM
- Posted Re: Splunk_TA_nix: why are my reports showing "No results found"? on Reporting. 04-05-2020 01:52 PM
- Posted Splunk App for *nix: Why are all dashboards showing "no results found", but definitely ingesting from UF? on All Apps and Add-ons. 01-04-2020 11:08 PM
- Tagged Splunk App for *nix: Why are all dashboards showing "no results found", but definitely ingesting from UF? on All Apps and Add-ons. 01-04-2020 11:08 PM
- Tagged Splunk App for *nix: Why are all dashboards showing "no results found", but definitely ingesting from UF? on All Apps and Add-ons. 01-04-2020 11:08 PM
- Posted Re: Using Splunk Stream for Netflow- now, ingesting but how to graph? on All Apps and Add-ons. 09-24-2019 10:14 PM
- Posted zero'ing counter problem (and associated graph spike explosion) on Splunk Search. 08-28-2019 08:00 PM
- Tagged zero'ing counter problem (and associated graph spike explosion) on Splunk Search. 08-28-2019 08:00 PM
02-20-2022
11:27 AM
Thanks @furl for looping back with this! Last week I actually started the move over to the IT Essentials Work app after Splunk sunsetted the Splunk App for *nix.
05-15-2021
10:04 PM
Got there with some fiddling! Thanks for setting me on the right path.
05-15-2021
05:12 PM
OK, tried it. It's darn close, but it's duplicating that data set at several levels of the JSON nesting. Any final tweaks? I can rewrite my Python code if needed to spit out the JSON with different names, if that helps. It seems I might have to do that for the TALLY one (that also ends in -AUD). Thanks again!
05-15-2021
02:12 PM
Amazing. Thank you! So, given I already have the JSON data streaming in, I assume all I need is the below at the tail end of my search (and sorry, it was silly of me not to give you text data; sorry for the work you had to do replicating it manually from the screenshot, and thanks for explaining that!). Re your assumption of "sum": I believe all I actually need is "last". My code (which spits out the JSON) is hitting an API that reports something like a balance, so I'm not interested in summing balances over the search period, but rather in plotting the latest value that arrived, to give me the latest balance. In that case, do I just replace "sum" in your second-last line with "last"? Finally, do I need the fourth-last line (fields ...), or is that a hangover of you having to work with my screenshot?
| spath path=BINNANCE-BALANCES
| spath input=BINNANCE-BALANCES
| fields - _raw _time BINNANCE-BALANCES
| fields *-AUD
| stats sum(*) as *
| transpose 0
Thanks again. So impressed at the turnaround time, especially on a weekend! Keiran.
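The "last" vs "sum" point can be shown with a minimal Python sketch. The function and sample numbers here are hypothetical, not from the thread: each poll reports the *current* balance, so summing over a time range inflates the figure, while taking the most recent reading does not.

```python
def latest_balance(events):
    """events: list of (timestamp, balance) tuples from successive polls."""
    if not events:
        return None
    # Take the reading with the newest timestamp, not the sum of all readings.
    return max(events, key=lambda e: e[0])[1]

polls = [(1, 150.0), (2, 152.5), (3, 149.9)]
print(latest_balance(polls))  # 149.9
```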
05-15-2021
03:42 AM
Hey Splunk gurus! I have been going in circles trying to get a query going to give me a pie chart of what I would have thought is relatively straightforward JSON data. Here's what the JSON looks like. I'd like the pie chart composed of all the pink-arrow field values. Can someone help? Thanks so much! Keiran.
Labels
- stats
09-06-2020
03:04 AM
Thanks @taldavita - that was *exactly* the issue (sorry for the delayed reply; I missed the notifications on this thread somehow). Thanks so much! Enjoying my new dashboards now...
04-05-2020
01:52 PM
Did you solve this, @ljalvrdz? I'm having an identical issue. I have tried Splunk Enterprise 7.1.2 and over the weekend upgraded to 8.0.3, but still no joy. Any help appreciated, thanks!
01-04-2020
11:08 PM
Hi Guys,
I've installed the Splunk App for *nix on my search head, but all dashboards within the app come up "no results found".
I've followed the install docs [ https://docs.splunk.com/Documentation/UnixApp/5.2.5/User/DeploytheSplunkAppforUnixandLinuxinadistributedSplunkenvironment ] to the letter. In my case: 1/ on the search head & indexer (combined, in my case): installed the app and the TA add-on. 2/ on the Linux host I want to monitor: installed the UF and the TA add-on, and configured inputs.conf to start gathering, as per the snapshot here:
keiran@vm-untrust:/opt/splunkforwarder/etc/apps/Splunk_TA_nix/local$ cat inputs.conf
# Copyright (C) 2019 Splunk Inc. All Rights Reserved.
[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
source = vmstat
disabled = 0
[script://./bin/iostat.sh]
interval = 60
sourcetype = iostat
source = iostat
disabled = 0
[script://./bin/nfsiostat.sh]
interval = 60
sourcetype = nfsiostat
source = nfsiostat
disabled = 0
[script://./bin/ps.sh]
interval = 30
sourcetype = ps
source = ps
disabled = 0
[script://./bin/top.sh]
interval = 60
sourcetype = top
source = top
disabled = 0
[script://./bin/netstat.sh]
interval = 60
sourcetype = netstat
source = netstat
disabled = 0
I have confirmed data is coming in at the indexer / search head from the Linux box I want to monitor, and the 'interesting fields' seem to be pulling an awful lot of data back... so why aren't the dashboards working?
Not quite sure where to start troubleshooting this, so any help most appreciated! Thanks team! K.
Labels
- dashboard
- troubleshooting
09-24-2019
10:14 PM
@nikhilafedex - ingesting is the easy part: set up a new UDP data input (Settings -> Data inputs -> UDP -> new listener on 2055), plus a matching config on your network devices pointing to your Splunk instance (on UDP 2055).
Then the real work begins: making dashboards from the ingested data... I never found time to loop back on this. Wanted to. Just haven't managed to.
08-28-2019
08:00 PM
Hi Splunk gurus.
I have a query problem that's been challenging me for a while.
When my polling breaks, or when counters reset to zero for whatever reason (i.e. the device I'm polling is rebooted), I get a situation like this (red shading = condition when broken, green = when polling resumes properly):
So I basically get a HUUUUUUGE spike in my graphs, which destroys the rest of the fidelity on the Y-axis scale. As so:
Any ideas how I can solve this condition at the Splunk search / SPL layer? I don't believe I'll ever be able to fix it at the device layer, so the dashboards will need to handle the condition and work around it somehow. I'm sure I'm not the first to hit this problem, so I didn't want to reinvent the wheel (a quick search of the forums couldn't help me).
Here's my SPL for anyone who wants to copy/paste to give me a hand!
Thanks all!
sourcetype=_json source="/Applications/Splunk/etc/apps/_kapp/bin/_KNETWORK/getPFSENSEstats.py"
| streamstats current=t global=f window=2
earliest(vtnet0BytesInPass) as lastBytesIn
latest(vtnet0BytesInPass) as currentBytesIn
earliest(vtnet0BytesOutPass) as lastBytesOut
latest(vtnet0BytesOutPass) as currentBytesOut
| eval mbpsIn =(currentBytesIn - lastBytesIn )*8/1024/1024/60
| eval mbpsOut =(currentBytesOut - lastBytesOut)*8/1024/1024/60
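One common way to handle the counter-reset spike, sketched here in Python rather than SPL (illustrative numbers; not a drop-in fix for the search above), is to detect a negative delta between successive counter samples and emit a gap for that interval instead of a huge bogus value:

```python
def mbps_series(samples, interval_secs=60):
    """samples: byte-counter readings in time order.
    Returns Mbps per interval, with None where the counter reset."""
    rates = []
    for prev, cur in zip(samples, samples[1:]):
        delta = cur - prev
        if delta < 0:
            # Counter went backwards: device rebooted or polling broke.
            # Emit a gap instead of a giant bogus spike.
            rates.append(None)
        else:
            rates.append(delta * 8 / 1024 / 1024 / interval_secs)
    return rates

# 60 s polling intervals; the third sample is a counter reset back to zero.
print(mbps_series([0, 7864320, 0, 7864320]))  # [1.0, None, 1.0]
```

In SPL the equivalent idea is usually a conditional eval that nulls out negative deltas so charts simply show a gap at the reset.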
06-20-2019
03:56 AM
Hi @akg2019, how did you get on with this? I haven't had a chance to circle back yet, but I'm keen to hear how you went.
05-21-2019
02:25 AM
Sounds good to me! We may have to tweak the logic once you get a graph going, based on some of my assumptions (for instance, I've been wondering about the netflow bytes_in field: is that bytes since the last update, or cumulative to that point, like the TCP sequence number?). If you want some sample live data, I could look into exporting some for you.
05-20-2019
05:21 PM
^ and I'm not sure if you missed this comment, @DavidHourani.
Does my logic make sense, and if so, do your search queries cover it?
I will test on the weekend.
05-20-2019
05:18 PM
Hi @akg2019 - I think your problem / question around sFlow is a more fundamental one. I'm a long-time network engineer, so I might be able to shed some light on the different datasets.
Netflow is an accurate measure of traffic (bits); in fact it was, and still is, used by many billing systems to track which customer consumed what data.
sFlow, on the other hand, is not. The clue is in the name: it's sampled. Its evolution was driven by faster network kit, where netflow (tracking the bit count on EVERY session) would flatten the CPU of the router. So in sFlow, every so often (maybe one packet in 10,000), the sFlow process wakes up, peeks at the packet transiting the box at that moment, reports on it, then disappears... and reappears to check in at the next sampled interval (another 10,000 packets). The logic is that the sampling can roughly report on the transiting traffic, as big / long-running / high-bandwidth sessions are more likely to be hit by the sampling.
In sFlow, I don't believe sessions (src_ip + src_port + dst_ip + dst_port) are tracked, which is what a router needs in order to keep an incrementing bit count like it does in netflow... so the fact that the bytes_in field is not present in sFlow makes perfect sense to me.
That said, I know SolarWinds etc. have developed an interpretation of this data. Graphing it doesn't make sense to me, given the above. What does it look like? Ah, just checked; it seems it's tabular reporting (see https://www.solarwinds.com/topics/sflow-collector), which does make more sense. Those tables do have a byte count though... hmm, how would they get that? (Checking your sFlow packet sample now...)
OK, there is a seqnumber field, and in TCP at least that's an incremental count of the bytes transferred so far, included in EVERY packet as a running total. So I guess that's how they do it, and that's what you likely need to report on. But that only exists in TCP (UDP, for instance, does not have this).
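To make the sampling idea concrete, here is a rough Python sketch of how sampled counts are typically scaled back up. The numbers and function name are illustrative assumptions, not anything from this thread or the sFlow spec:

```python
def estimate_traffic_bytes(sampled_packet_sizes, sampling_rate):
    """With 1-in-N sampling, each observed packet stands in for ~N like it."""
    return sum(sampled_packet_sizes) * sampling_rate

# Three sampled packets seen at a 1-in-10,000 sampling rate.
print(estimate_traffic_bytes([1500, 64, 1500], 10_000))  # 30640000
```

This is also why sFlow estimates are rough for small or short-lived flows: a flow that the sampler never happens to hit contributes nothing to the estimate.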
Hope this helps.
As for me, I will likely have time to mess around with this stuff on the weekend.
Ever grateful for your assistance here @DavidHourani
thanks
Keiran.
05-20-2019
05:30 AM
Hi guys, bitrate is simple. It's just: end time minus start time, which gives you the duration. In this case you derive it from successive events where the flow details match. Let's say this is 10 secs. Then the bits side: if you have a delta of, say, 1 MB (again, just subtract the earlier byte count from the later byte count) in that 10 seconds, you divide by 10 to get the average rate per second: 100 KB/s, which is about 800 kbps, in this example.
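The arithmetic described above, as a small Python sketch (illustrative values, not real flow data):

```python
def avg_bitrate_bps(bytes_start, bytes_end, t_start, t_end):
    """Average bitrate in bits per second between two counter readings."""
    duration = t_end - t_start
    if duration <= 0:
        raise ValueError("need t_end > t_start")
    return (bytes_end - bytes_start) * 8 / duration

# 1 MB transferred over 10 seconds -> 100 KB/s -> 800,000 bits per second.
print(avg_bitrate_bps(0, 1_000_000, 0, 10))  # 800000.0
```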
05-20-2019
02:45 AM
Hi @DavidHourani - really appreciate your assistance here... attached is a screenshot of some sample data that's as good as any other. Let me know if you need an actual export.
Basically (if you didn't know about netflow), the router sends periodic "flow records" back to a receiver, in this case Splunk (FYI: each data packet can contain many flow records, and Splunk pulls them out as an event per record)... so it's a snapshot of the router's session table at that moment, inclusive of the byte count for those transiting sessions. So if you have a long-running TCP session to a DB server, for instance, at minute one it might have a byte count (bytes_in/out) of, say, 100... check back one minute later and it might have a byte count of, say, 1000, indicating 900 more bytes in that last minute.
I think the search logic needs to:
- group like flows into TCP/UDP sessions, i.e. (src_ip + dst_ip + src_port + dest_port)
- graph bytes over time
- and then grab only, say, the top 10 flows by byte count.
Check the original post for the kind of flow data visualisation over time we are hoping for.
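The grouping steps above can be sketched in Python. The field names and sample records are assumptions for illustration, not the actual netflow schema; and since the byte counter is described as cumulative per flow, the sketch keeps the highest reading per session rather than summing snapshots:

```python
def top_flows(records, n=10):
    """records: dicts with src_ip, dst_ip, src_port, dst_port, bytes."""
    latest = {}
    for r in records:
        key = (r["src_ip"], r["dst_ip"], r["src_port"], r["dst_port"])
        # Cumulative counter: keep the latest (highest) reading per session.
        latest[key] = max(latest.get(key, 0), r["bytes"])
    return sorted(latest.items(), key=lambda kv: kv[1], reverse=True)[:n]

records = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "src_port": 50000, "dst_port": 443, "bytes": 100},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "src_port": 50000, "dst_port": 443, "bytes": 1000},
    {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.9", "src_port": 50001, "dst_port": 53, "bytes": 300},
]
print(top_flows(records, n=2))
```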
05-20-2019
01:02 AM
Hi all, I haven't had time to look at this further. My Splunk is still ingesting loads of netflow, but I haven't started dev on the SPL. It seems lots of people are looking for this. @DavidHourani has specifically asked for a new question to be asked in a new post; not quite sure why, as it's still the same dev problem we need solved, but regardless, happy to follow the new thread. Just please link it here, @akg2019, so we know where to follow. Thanks guys!
01-04-2019
10:13 PM
Just circling back and updating here. I couldn't get the above to work.
So I fixed it by rewriting my code to structure the JSON in a way that Splunk would inherently split, without any tweaking of props.conf.
Here's what it spits out now, and Splunk is parsing it just fine:
{
"BOMPREDapiPollTime": 1546140373.072095,
"BOMPREDdate": "2018-12-30T11:44:43+11:00",
"BOMPREDdescBrief": "Mostly sunny.",
"BOMPREDdescDetail": "Hot and mostly sunny. Winds west to northwesterly 15 to 20 km/h shifting east to northeasterly 15 to 25 km/h in the late morning and early afternoon then becoming light in the late evening.",
"BOMPREDfireDanger": "Very High",
"BOMPREDiconCode": 3,
"BOMPREDrainChance": 10,
"BOMPREDtempMax": 32,
"BOMPREDuvAlert": "Sun protection 8:30am to 5:20pm, UV Index predicted to reach 13 [Extreme]"
}
{
"BOMPREDapiPollTime": 1546140373.072095,
"BOMPREDdate": "2018-12-31T00:00:00+11:00",
"BOMPREDdescBrief": "Shower or two.",
"BOMPREDdescDetail": "Partly cloudy. Medium (50%) chance of showers, most likely in the evening. The chance of a thunderstorm in the afternoon and evening. Light winds becoming northeasterly 15 to 20 km/h in the evening then becoming light in the late evening.",
"BOMPREDiconCode": 11,
"BOMPREDrainChance": 50,
"BOMPREDrainMM": "0 to 1 mm",
"BOMPREDtempMax": 29,
"BOMPREDtempMin": 22
}
{
"BOMPREDapiPollTime": 1546140373.072095,
"BOMPREDdate": "2019-01-01T00:00:00+11:00",
"BOMPREDdescBrief": "Mostly sunny.",
"BOMPREDdescDetail": "Partly cloudy. Slight (20%) chance of a shower. The chance of a thunderstorm in the morning. Light winds becoming northeasterly 15 to 20 km/h during the day then becoming light during the afternoon.",
"BOMPREDiconCode": 3,
"BOMPREDrainChance": 20,
"BOMPREDtempMax": 30,
"BOMPREDtempMin": 23
}
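The restructuring described above (one flat JSON object per prediction, instead of one nested object keyed by "TODAY-PLUS-X") can be sketched like this. The helper name and trimmed sample payload are illustrative, not the actual script:

```python
import json

def split_predictions(nested, shared_keys=("BOMPREDapiPollTime",)):
    """Yield one flat dict per 'TODAY-PLUS-*' entry, copying shared fields in."""
    for key, value in nested.items():
        if key.startswith("TODAY-PLUS-") and isinstance(value, dict):
            event = {k: nested[k] for k in shared_keys if k in nested}
            event.update(value)
            yield event

nested = {
    "BOMPREDapiPollTime": 1546140373.0,
    "TODAY-PLUS-0": {"BOMPREDdate": "2018-12-30", "BOMPREDtempMax": 32},
    "TODAY-PLUS-1": {"BOMPREDdate": "2018-12-31", "BOMPREDtempMax": 29},
}
for event in split_predictions(nested):
    print(json.dumps(event))  # one standalone JSON object per line
```

Emitting one self-contained object per event is what lets Splunk's default JSON handling pick each prediction up without custom line-breaking rules.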
01-01-2019
05:50 PM
Thanks!! That worked. I'd never used eventstats before - good to know!
12-29-2018
09:54 PM
Hi, thanks for your help, but sorry if I didn't explain it well... I don't need just a single record. Each time the script runs, it generates 7 JSON events, and the table needs all 7, but only the latest 7. Each batch of 7 JSON events shares the same API poll epoch time. Hopefully that clears things up?
12-29-2018
08:37 PM
Hi guys,
I need help with a search. I believe it's a subsearch that I need (the output of one search has to feed another search), but I can't make it work.
Basically, I have written code that polls a weather forecast API and spits back JSON, which Splunk gobbles up. Trouble is, the API call is made several times a day, which means I get several duplicate predictions in my data set. I only want to use the latest data and ignore all previous polls.
Here is my search, which works well in giving me the table I need when I have a clean index (i.e. only one API poll has been ingested so far):
sourcetype=_jsonFUTURE BOMPREDdate | eval Day = strftime(_time,"%a") | eval Date = strftime(_time,"%F") | sort _time | table Day, Date, BOMPREDrainChance, BOMPREDrainMM, BOMPREDdescBrief, BOMPREDdescDetail | rename BOMPREDrainChance as "Rain%", BOMPREDrainMM as "RainMM", BOMPREDdescBrief as "ForecastBrief", BOMPREDdescDetail as "ForecastDetail"
but when the API poll script has run twice, for instance, the table now has duplicates as shown below:
In my JSON data set I have now included a field called 'BOMPREDapiPollTime', which is the epoch time at which the script was executed... so the 7 JSON events that get ingested each time the script runs all share the same 'BOMPREDapiPollTime' value, as shown below.
So, all I believe I need to do is:
a) find the latest value of 'BOMPREDapiPollTime', which I can do with the search 'sourcetype=_jsonFUTURE BOMPREDdate | stats latest(BOMPREDapiPollTime) as pollTime'
b) feed that into my working search (pictured above), I believe with a subsearch...
I have tried variants of the below without luck:
sourcetype=_jsonFUTURE BOMPREDdate [search sourcetype=_jsonFUTURE BOMPREDdate | stats latest(BOMPREDapiPollTime) as pollTime] | eval Day = strftime(_time,"%a") | eval Date = strftime(_time,"%F") | sort _time | table Day, Date, BOMPREDrainChance, BOMPREDrainMM, BOMPREDdescBrief, BOMPREDdescDetail | rename BOMPREDrainChance as "Rain%", BOMPREDrainMM as "RainMM", BOMPREDdescBrief as "ForecastBrief", BOMPREDdescDetail as "ForecastDetail"
BUT I can't make it work! (I always get zero results.)
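For reference, the filtering being attempted here can be sketched in plain Python (illustrative sample data; this is the logic the subsearch needs to express, not working SPL):

```python
def latest_batch(events, key="BOMPREDapiPollTime"):
    """Return only the events from the most recent poll."""
    if not events:
        return []
    newest = max(e[key] for e in events)
    return [e for e in events if e[key] == newest]

polls = [
    {"BOMPREDapiPollTime": 100, "day": "Mon"},
    {"BOMPREDapiPollTime": 200, "day": "Mon"},
    {"BOMPREDapiPollTime": 200, "day": "Tue"},
]
print(latest_batch(polls))  # only the two events from poll 200
```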
Any help would be greatly appreciated.
I'm sure it's something stupid I'm doing.
thanks in advance guys!
Keiran.
12-29-2018
02:54 AM
I'll give it a go. So the main difference with your suggestion is using BREAK_ONLY_BEFORE rather than MUST_BREAK_AFTER, I guess? What's the logic there? I'll let you know as soon as I can retest. Thanks, Keiran.
12-28-2018
08:12 PM
Hi gurus,
I have JSON data that looks like the below.
{
"BOMxmlDlTime": 0.6584670543670654,
"TODAY-PLUS-0": {
"BOMPREDdate": "2018-12-29",
"BOMPREDdescBrief": "Sunny.",
"BOMPREDfireDanger": "Very High",
"BOMPREDiconCode": 1,
"BOMPREDrainChance": 0,
"BOMPREDtempMax": 30
},
"TODAY-PLUS-1": {
"BOMPREDdate": "2018-12-30",
"BOMPREDdescBrief": "Mostly sunny.",
"BOMPREDiconCode": 3,
"BOMPREDrainChance": 5,
"BOMPREDtempMax": 31,
"BOMPREDtempMin": 22
},
"TODAY-PLUS-2": {
"BOMPREDdate": "2018-12-31",
"BOMPREDdescBrief": "Possible shower.",
"BOMPREDiconCode": 17,
"BOMPREDrainChance": 40,
"BOMPREDtempMax": 31,
"BOMPREDtempMin": 22
},
"kCryptoDictType": "BOMpredictions"
}
I want to split each chunk of "TODAY-PLUS-X": { ... }, into its own event. I've been reading and attempting various things for the last few hours, and nothing I do seems to allow this. The obvious place to split would be at the lines with:
},
... so I've been playing with various combinations of MUST_BREAK_AFTER with regex in props.conf to do this, but nothing seems to make the events split. Here's my current props.conf (in the /etc/system/local/ directory, but I have also tried /etc/app/xxxxx/local). And I've been doing a full Splunk restart each time I edit props.conf, if anyone was wondering. The last 5 lines are, I think, the bits I've been manually editing (the first bits were auto-created by the GUI when I created my new _jsonFUTURE sourcetype):
[_jsonFUTURE]
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
category = Structured
description = K - required by BOM predictions data (needs special event splitting and date extraction logic)
disabled = false
pulldown_type = 1
BREAK_ONLY_BEFORE_DATE = false
MUST_BREAK_AFTER = \},
TIME_PREFIX = \"BOMPREDdate\":\s+\"
TIME_FORMAT = %Y-%m-%d
MAX_DAYS_HENCE = 7
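To illustrate the split being attempted (in Python rather than props.conf; this is not a fix for the config above, just the intended behaviour made concrete), each "TODAY-PLUS-N": { ... } block would be pulled out of the raw text as its own event:

```python
import re

def split_today_chunks(raw):
    """Return each "TODAY-PLUS-N": {...} block found in the raw JSON text.
    Assumes the inner objects contain no nested braces."""
    return re.findall(r'"TODAY-PLUS-\d+":\s*\{[^}]*\}', raw)

raw = ('{"a": 1, "TODAY-PLUS-0": {"BOMPREDdate": "2018-12-29"}, '
       '"TODAY-PLUS-1": {"BOMPREDdate": "2018-12-30"}}')
for chunk in split_today_chunks(raw):
    print(chunk)  # one chunk per desired event
```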
Can anyone please tell me what I'm doing wrong? I'm pulling my hair out !!!!
Thanks in advance guys,
K.
06-15-2018
05:38 AM
Giving this a nudge so it bubbles up again for some viewers who can help!
06-10-2018
03:51 PM
5 Karma
Hi Splunk gurus!
It's a long weekend here in Australia, and I thought I'd finally get around to ticking something off my wish list: netflow for my home network.
So I've got a Cisco ADSL router that's successfully streaming netflow to my Splunk box (verified first with tcpdump). On the Splunk side, I started off down one path ("Netflow Analytics", until I realised you had to pay, a lot, for that!)... then some searching here pointed me to "Splunk Stream", which seems robust, is free, is now installed, and is happily gobbling up my netflow stream! See attached photo.
Which brings me to the fun part (and my question): where can I find some pre-canned SPL to start plotting my traffic on pretty graphs? The Stream UI doesn't look to be set up for this. I know I could start writing it myself, but it's a relatively complex dataset, and surely this has been done lots of times before, so I shouldn't have to reinvent the wheel. So if anyone can point me at some SPL (or an app!) that would be great!
Thanks in advance all.
Keiran.
PS: this is the sort of graph I'm hoping to create (from the paid app - https://splunkbase.splunk.com/app/489):