All Posts

Hey I have the following query:

```
| makeresults
| eval prediction_str_body="[{'stringOutput':'Alpha','doubleOutput':0.52},{'stringOutput':'Beta','doubleOutput':0.48}]"
```

But no matter what I do, I can't seem to extract each element of the list and turn it into its own event. Ideally I'd like a table afterwards with the sum of each value across all rows:
Alpha: 0.52
Beta: 0.48
Thanks!
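One approach that may work (a sketch, assuming the single-quoted string can simply be rewritten as valid JSON) is to swap the quotes, split the array with spath, expand each element into its own event, and then sum by name:

| makeresults
| eval prediction_str_body="[{'stringOutput':'Alpha','doubleOutput':0.52},{'stringOutput':'Beta','doubleOutput':0.48}]"
``` turn the single quotes into double quotes so spath can treat the string as JSON ```
| eval json=replace(prediction_str_body, "'", "\"")
``` pull out each array element as one value of a multivalue field, then expand to one event per element ```
| spath input=json path={} output=element
| mvexpand element
| spath input=element
``` sum the numeric output per label across all rows ```
| stats sum(doubleOutput) as total by stringOutput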
I assume that Splunk already gives you msg as a field.  You can then use extract on it.

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:profiling:txt"
| rename msg as _raw
| extract
| search AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100
| rename SrcApp as Channel, SubApp as "Sub Application", RefMsgNm as Message, DestId as SOR, MsgNm as "SOR Message"
| fields Channel, "Sub Application", Message, SOR, "SOR Message", count
| sort Channel, "Sub Application", Message, SOR, "SOR Message", count

(As your new source is JSON, overriding _raw should be fine.)  Hope this helps.
Here's another way to find those transactions - replace transaction with this

| streamstats global=f reset_after="state=1" range(_time) as duration list(_raw) as events count as eventcount by zone
| where state=1
| table _time events zone state duration eventcount
Thanks a lot for the response @gcusello, it works.
So, it seems like your zones repeat themselves. Here is an example of using your data. You can paste this example into your search | makeresults | eval x=split("2023-09-18 11:22:05.9145992, E7F93BB1-608A-4D2F-AF34-0ED1AB279A65, AUR MCPA Alarm 16,2, Full; Bins East; Level 1; Divert Row 057; Zone 113,1,0,192###2023-09-18 11:31:35.7205659, 2C8701D0-7B9D-4F99-8679-A4F3F98086C9, AUR MCPA Alarm 16,2, Full; Bins East; Level 1; Divert Row 057; Zone 113,0,0,192###2023-09-18 11:36:24.1803900, 0C07C755-C59B-4E9F-92A6-E60EC1790E00, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,0,0,192###2023-09-18 12:00:27.1437935, 0BE15F46-AA1E-46D2-97FF-5E8F68EC4415, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,1,0,192###2023-09-18 12:00:37.1563574, 67E5E8C7-3D36-41C9-9062-F71AF3481012, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,0,0,192###2023-09-18 12:00:47.1724708, 39C5326A-B2B6-478A-9756-8FAD049074C9, AUR MCPA Alarm 13,2, Full; Bins East; Level 1; Divert Row 227; Zone 122,1,0,192###2023-09-18 12:00:55.1835517, 7C060FE4-3441-4BEB-AFFE-97D8E0E5F324, AUR MCPA Alarm 13,2, Full; Bins East; Level 1; Divert Row 227; Zone 122,0,0,192###2023-09-18 12:03:27.3790874, B40D0D99-8E60-4AC8-8F34-2DA037945463, AUR MCPA Alarm 24,2, Full; Bins East; Level 1; Divert Row 121; Zone 117,1,0,192###2023-09-18 12:03:31.3853304, B72D54D5-B7B8-4928-83D2-DF64FAAD52BD, AUR MCPA Alarm 24,2, Full; Bins East; Level 1; Divert Row 121; Zone 117,0,0,192###2023-09-18 12:11:28.9249859, 3323D5D6-98BE-4867-86D9-7068225C44E6, AUR MCPA Alarm 19,2, Full; Bins East; Level 1; Divert Row 095; Zone 116,1,0,192###2023-09-18 12:11:32.9266932, 32C54B9A-03E1-4E70-9F6E-F34FF4D4EF8D, AUR MCPA Alarm 19,2, Full; Bins East; Level 1; Divert Row 095; Zone 116,0,0,192###2023-09-18 12:20:34.8242708, 1231E232-07F7-40F6-8CC0-23A80D9693DA, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,1,0,192###2023-09-18 12:21:01.8614482, D807C593-5F41-44F3-9BEA-601BCEA45A96, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,0,0,192###2023-09-18 12:41:58.6150128, 04A9F0AC-34E2-4514-9301-E607F5B90DBB, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,1,0,192###2023-09-18 12:42:16.6309373, DAF119E7-8BE5-4B14-AF98-EC34F52CF343, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,0,0,192###2023-09-18 12:45:56.3032344, CF2988F9-7354-4C6F-A320-ED50AF43F149, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,1,0,192###2023-09-18 12:48:22.3814934, F12CAAFE-8861-40A5-8763-EDF02C25722F, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,0,0,192###2023-09-18 12:49:10.4169289, C72DB2E5-A7E6-471C-8BAC-280A91E28338, AUR MCPA Alarm 14,2, Full; Bins East; Level 1; Divert Row 223; Zone 121,1,0,192###2023-09-18 12:53:18.5610031, 4C8CAF70-1A73-4318-A0DF-B42F76352277, AUR MCPA Alarm 18,2, Full; Bins East; Level 1; Divert Row 257; Zone 123,1,0,192###2023-09-18 12:53:56.5822544, 9D2E9472-7FCF-4266-A7C5-76942F4E9D71, AUR MCPA Alarm 18,2, Full; Bins East; Level 1; Divert Row 257; Zone 123,0,0,192###2023-09-18 12:57:56.9627790, CC8B059B-5A4F-46CE-9CB2-0E6F98E95A1B, AUR MCPA Alarm 13,2, Full; Bins East; Level 1; Divert Row 227; Zone 122,1,0,192###2023-09-18 13:01:11.2381480, ECC5639E-14DA-4067-9874-DAC23B56F50A, AUR MCPA Alarm 13,2, Full; Bins East; Level 1; Divert Row 227; Zone 122,0,0,192", "###") | mvexpand x | rename x as _raw | eval _time=strptime(_raw, "%F %T.%Q") | sort - _time | fields _time _raw 
``` The above creates your data set ```
``` Extract the zone and state ```
| rex "Zone (?<zone>\d+),(?<state>\d)"
``` Now look for 2 events per transaction ```
| transaction maxevents=2 zone startswith=eval(state=1) endswith=eval(state=0)

If you set up a field extraction to extract zone and state automatically, you can then search for zone=X or zone=Y in the search and then the transaction command is simple. Note that transaction has limitations and the "length" of your transactions is quite long, so you should look at using some kind of stats to evaluate these.
You selected lookup as label, but are using inputlookup.  You would have the answer if you stick to lookup.

index="web_index"
| lookup URLs.csv kurl as url output kurl as match
| eval match = if(isnull(match), 0, 1)
| stats sum(match) as count by url
Hello,

How do we pre-calculate and search historical data from a correlation between an index and a CSV/DB lookup?

For example: from vulnerability_index, there are 100k IP addresses scanned in 24 hours. When performing a lookup against a CSV file from this index, only 2 IPs match, but every time a search is run in the dashboard it compares 100k IPs with 2 IPs. How do we pre-calculate the search and store the data, so that every time a search is run on a dashboard it only searches the historical data and does not have to compare 100k IPs with 2 IPs? Thank you in advance for your help.

index=vulnerability_index
| table ip_address, vulnerability, score

ip_address     vulnerability         score
192.168.1.1    SQL Injection         9
192.168.1.1    OpenSSL               7
192.168.1.2    Cross Site-Scripting  8
192.168.1.2    DNS                   5
x.x.x.x        ...
total IP: 100k

company.csv

ip_address     company    location
192.168.1.1    Comp-A     Loc-A
192.168.1.2    Comp-B     Loc-B

| lookup company.csv ip_address as ip_address OUTPUTNEW ip_address, company, location

ip_address     vulnerability         score    company    location
192.168.1.1    SQL Injection         9        Comp-A     Loc-A
192.168.1.1    OpenSSL               7        Comp-A     Loc-A
192.168.1.2    Cross Site-Scripting  8        Comp-B     Loc-B
192.168.1.2    DNS                   5        Comp-B     Loc-B
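One common way to pre-calculate this (a sketch using the field names from the example; the lookup file name vulnerability_matches.csv is made up) is to run the expensive correlation once in a scheduled search, write only the matching rows out with outputlookup, and have the dashboard read that small result set back with inputlookup:

``` scheduled search, e.g. once every 24 hours: do the expensive correlation once ```
index=vulnerability_index
| lookup company.csv ip_address OUTPUT company, location
| where isnotnull(company)
| table ip_address, vulnerability, score, company, location
| outputlookup vulnerability_matches.csv

``` dashboard search: read only the stored matches ```
| inputlookup vulnerability_matches.csv

A summary index populated by the same scheduled search would also work if you need to keep the per-day history rather than overwrite it each run.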
Command history is logged in LOCAL7 facility, NOTICE level.  You may want to examine /etc/rsyslog.conf (and related conf files) to find out which log file(s) contain local7.notice. According to https://github.com/rsyslog/rsyslog/blob/master/platform/redhat/rsyslog.conf, RedHat default is to send local7.* into /var/log/boot.log.  But your system may have customized settings.  Normally, /var/log/secure is used for authpriv.*, thus it does not contain command history. If the file that contains local7.notice is not ingested, you will need to ingest it. Hope this helps.
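If the file that holds local7.notice is not yet monitored, a stanza along these lines on the forwarder would pick it up. Treat the path, index, and sourcetype below as placeholders rather than confirmed values; use whatever file your rsyslog configuration actually writes local7.notice to.

# inputs.conf (path, index, and sourcetype are assumptions, not confirmed values)
[monitor:///var/log/boot.log]
sourcetype = linux_cmd_history
index = os
disabled = false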
That's just setting up dummy data for the example. The mvmap just concatenates ValueX with R.# to make each of the elements of C1 show the value + row number. foreach just makes field C# equal to a random number, where # is a loop from 2, 3, 4 in the foreach.
This problem appears to have been resolved. The confusing AppInspect test error I described before was returned when using AppInspect 2.37.0, but now I see some patch releases have been made and the test passes when using AppInspect 2.37.2.
Hello, recently I've added a new firewall as a source to the splunk solution at work but I can't figure why my LINE_BREAKER thing is not working. I've deployed the thing both at the heavy forwarder and the indexers but still can't make it work. Logs are coming in like this:   Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:27 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347491321753 tz="-0500" logid="0001000014" type="traffic" subtype="local" level="notice" vd="vdom1" srcip=xx.xx.xx.xx srcport=3465 srcintf="wan_1" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="client" dstintfrole="undefined" srccountry="Netherlands" dstcountry="Peru" sessionid=1290227282 proto=6 action="close" policyid=0 policytype="local-in-policy" service="HTTPS" trandisp="noop" app="HTTPS" duration=9 sentbyte=1277 rcvdbyte=8294 sentpkt=11 rcvdpkt=12 appcat="unscanned" Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347381319603 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=143.137.146.130 srcport=33550 srcintf="wan_2" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="3050" dstintfrole="lan" srccountry="Peru" dstcountry="United States" sessionid=1290232934 proto=6 action="close" policyid=24 policytype="policy" poluuid="12c55036-3d5b-51ee-9360-c36a034ab600" policyname="INTERNET_VDOM" service="HTTPS" trandisp="noop" duration=2 sentbyte=2370 rcvdbyte=5826 sentpkt=12 rcvdpkt=11 appcat="unscanned" Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347443046437 tz="-0500" logid="0000000020" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=xx.xx.xx.xx srcport=52777 srcintf="wan_2" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="3050" dstintfrole="lan" srccountry="Peru" dstcountry="Peru" sessionid=1289825875 proto=6 action="accept" policyid=24 policytype="policy" poluuid="12c55036-3d5b-51ee-9360-c36a034ab600" policyname="INTERNET_VDOM" service="HTTPS" trandisp="noop" duration=500 sentbyte=1517 rcvdbyte=1172 sentpkt=8 rcvdpkt=7 appcat="unscanned" sentdelta=1517 rcvddelta=1172 Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347481317830 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=xx.xx.xx.xx srcport=18191 srcintf="3050" srcintfrole="lan" dstip=xx.xx.xx.xx dstport=443 dstintf="wan_2" dstintfrole="undefined" srccountry="Peru" dstcountry="Peru" sessionid=1290224387 proto=6 action="timeout" policyid=21 policytype="policy" poluuid="ab285ae0-3d5a-51ee-dce1-3f4aec1e32dc" policyname="PUBLICACION_VDOM" service="HTTPS" trandisp="noop" duration=13 sentbyte=180 rcvdbyte=0 sentpkt=3 rcvdpkt=0 appcat="unscanned" Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:27 devname="fw_name_2" devid="fortigate_id_2" eventtime=1695157346792901761 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom3" srcip=xx.xx.xx.xx srcport=47767 srcintf="3006" srcintfrole="lan" dstip=xx.xx.xx.xx dstport=8580 dstintf="wan_2" dstintfrole="undefined" srccountry="United States" dstcountry="Peru" sessionid=3499129086 proto=6 action="timeout" policyid=18 policytype="policy" poluuid="9cba23b2-3dfa-51ee-847f-49862ff000c0" policyname="PUBLICACION_VDOM" service="tcp/8580" trandisp="noop" duration=10 sentbyte=40 rcvdbyte=0 sentpkt=1 rcvdpkt=0 appcat="unscanned" 
srchwvendor="Cisco" devtype="Router" mastersrcmac="xxxxxxxxxxxxxxx" srcmac="xxxxxxxxxxxxxxx" srcserver=0   And the configuration I added into props.conf is the following:   [host::host_ip] SHOULD_LINEMERGE = false LINE_BREAKER = ([\r\n]+)(?=\w{3}\s+\d{1,2}\s\d{2}\:\d{2}\:\d{2}) TIME_PREFIX = eventtime= TIME_FORMAT = %b %d %H:%M:%S   The format is similar to the configuration applied to similar sources so I can't figure out why it isn't working. I'd appreciate any kind of insight you guys could bring. Thanks in advance!    
Sometimes the fix is right there in the documentation itself: https://docs.splunk.com/Documentation/AddOns/released/AWS/Troubleshooting

I fixed the issue by updating the splunk-launch.conf file and adding the custom management port. The latest version of the AWS add-on doesn't work with a custom management port; it only works on 8089.
Hi Rich,

Brilliant! Thank you, that worked.
This worked for me on version 8.x recently. Thanks.
The lack of props for the sourcetype is not a good thing.  It means Splunk is guessing about how to interpret the data and may be guessing wrong.  Perhaps Splunk Cloud makes different assumptions about the data than Splunk Enterprise does. Create an app with good props.conf settings for the sourcetype and install the app in both environments.  That should fix it.
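For illustration only, a minimal props.conf in such an app might look like the following. The sourcetype name and every value below are placeholders and would need to be set to match the actual data.

# props.conf (all values here are illustrative, not taken from the original post)
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000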
I finally solved the problem by replacing the semicolon  with its URL encoding (%3b) inside the curl command: curl ... -d search="search ... | makemv delim=\"%3b\" hashes | ..." -d output_mode=csv  
To include the time component, we don't need to extract anything.  Just treat StartDateTime as a floating point number. (40587 is the Modified Julian Date of the Unix epoch, 1970-01-01, so subtracting it and multiplying by 86400 gives Unix seconds.)

| makeresults
| eval StartDateTime="59025.5249306"
| eval time=(StartDateTime-40587) * 86400
| eval humanTime=strftime(time, "%c")
I am trying to add a new input in inputs.conf for a network shared folder, to forward some logs from a device that has no log-forwarding option. When viewing splunkd.log I see that the username and password are wrong:

09-19-2023 21:59:46.953 +0300 WARN FilesystemChangeWatcher [10812 MainTailingThread] - error getting attributes of path "\\192.168.1.142\df\InvalidPasswordAttempts.log": The user name or password is incorrect.

However, I can access the shared folder with a browser on the UF host with no problems. By the way, I am using my Microsoft account to log in to the Windows 11 machine where the UF resides. Any suggestions?

Thanks
Working with TAC, we figured out a solution for the Loading... issue. There is an issue with the browser properly displaying the toolbar; this is mitigated by adjusting the web.conf file. There are a couple of places where I had to edit the file:

$Splunk_Home\etc\system\default\web.conf
$Splunk_Home\etc\system\local\web.conf
$Splunk_Home\var\run\splunk\merged\web.conf

You want to find "minify_js" and set it to false:

minify_js = false

By default, it will be set to "true". Restart the Splunk service, flush your browser cache, and you should be good to go! (If it does not work, there might be another web.conf file with minify_js still set to true. Search through and adjust as needed.)
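For reference, the local override would look something like this; minify_js lives in the [settings] stanza of web.conf, and the path follows the same $Splunk_Home layout mentioned above:

# $Splunk_Home\etc\system\local\web.conf
[settings]
minify_js = false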
The only guidance I received was to downgrade; however, since Splunk doesn't support "downgrades", they provide no guidance on how to do it. I ended up uninstalling and reinstalling 9.1, confirmed the issue was resolved after a clean install, and then proceeded to restore items/apps/config files one by one.