All Posts



Hello, how can I pre-calculate and store the results of a correlation between an index and a CSV/DB lookup, so that dashboard searches read the historical results instead of repeating the comparison? For example: from vulnerability_index, there are 100k IP addresses scanned in 24 hours. When performing a lookup against the CSV file from this index, only 2 IPs match, but every time a search is performed in the dashboard, it compares the 100k IPs with those 2 IPs. How do we pre-calculate the search and store the data, so that a dashboard search only reads the historical data and does not have to compare 100k IPs each time? Thank you in advance for your help.

index=vulnerability_index | table ip_address, vulnerability, score

ip_address    vulnerability         score
192.168.1.1   SQL Injection         9
192.168.1.1   OpenSSL               7
192.168.1.2   Cross-Site Scripting  8
192.168.1.2   DNS                   5
x.x.x.x       ...                   (total IPs: 100k)

company.csv:

ip_address    company   location
192.168.1.1   Comp-A    Loc-A
192.168.1.2   Comp-B    Loc-B

| lookup company.csv ip_address as ip_address OUTPUTNEW ip_address, company, location

ip_address    vulnerability         score   company   location
192.168.1.1   SQL Injection         9       Comp-A    Loc-A
192.168.1.1   OpenSSL               7       Comp-A    Loc-A
192.168.1.2   Cross-Site Scripting  8       Comp-B    Loc-B
192.168.1.2   DNS                   5       Comp-B    Loc-B
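One common pattern for this (a sketch only, not the only option) is to schedule the lookup search and write just the matching rows to a summary index with `collect`; dashboards then search the summary index instead of re-running the comparison. The index name `vuln_summary` below is an assumption and would need to be created first:

```
index=vulnerability_index
| lookup company.csv ip_address OUTPUT company, location
| where isnotnull(company)
| table _time, ip_address, vulnerability, score, company, location
| collect index=vuln_summary
```

With this scheduled (e.g. hourly), the dashboard search reduces to `index=vuln_summary | table ip_address, vulnerability, score, company, location`, which reads only the pre-matched rows.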
Command history is logged in LOCAL7 facility, NOTICE level.  You may want to examine /etc/rsyslog.conf (and related conf files) to find out which log file(s) contain local7.notice. According to https://github.com/rsyslog/rsyslog/blob/master/platform/redhat/rsyslog.conf, RedHat default is to send local7.* into /var/log/boot.log.  But your system may have customized settings.  Normally, /var/log/secure is used for authpriv.*, thus it does not contain command history. If the file that contains local7.notice is not ingested, you will need to ingest it. Hope this helps.
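If that file is not yet ingested, a minimal inputs.conf monitor stanza on the forwarder might look like the following. The path is an assumption based on the RedHat default above, and the index is a placeholder; point it at whatever file your rsyslog.conf actually routes local7.notice to:

```
[monitor:///var/log/boot.log]
sourcetype = syslog
index = main
disabled = false
```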
That's just setting up dummy data for the example. The mvmap just concatenates ValueX with R.# to make each of the elements of C1 show the value + row number. foreach just makes field C# equal to a random number, where # is a loop from 2, 3, 4 in the foreach.
This problem appears to have been resolved. The confusing AppInspect test error I described before was returned when using AppInspect 2.37.0, but now I see some patch releases have been made and the test passes when using AppInspect 2.37.2.
Hello, I recently added a new firewall as a source to the Splunk deployment at work, but I can't figure out why my LINE_BREAKER setting is not working. I've deployed the configuration both at the heavy forwarder and the indexers but still can't make it work. Logs are coming in like this:

Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:27 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347491321753 tz="-0500" logid="0001000014" type="traffic" subtype="local" level="notice" vd="vdom1" srcip=xx.xx.xx.xx srcport=3465 srcintf="wan_1" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="client" dstintfrole="undefined" srccountry="Netherlands" dstcountry="Peru" sessionid=1290227282 proto=6 action="close" policyid=0 policytype="local-in-policy" service="HTTPS" trandisp="noop" app="HTTPS" duration=9 sentbyte=1277 rcvdbyte=8294 sentpkt=11 rcvdpkt=12 appcat="unscanned"
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347381319603 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=143.137.146.130 srcport=33550 srcintf="wan_2" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="3050" dstintfrole="lan" srccountry="Peru" dstcountry="United States" sessionid=1290232934 proto=6 action="close" policyid=24 policytype="policy" poluuid="12c55036-3d5b-51ee-9360-c36a034ab600" policyname="INTERNET_VDOM" service="HTTPS" trandisp="noop" duration=2 sentbyte=2370 rcvdbyte=5826 sentpkt=12 rcvdpkt=11 appcat="unscanned"
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347443046437 tz="-0500" logid="0000000020" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=xx.xx.xx.xx srcport=52777 srcintf="wan_2" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="3050" dstintfrole="lan" srccountry="Peru" dstcountry="Peru" sessionid=1289825875 proto=6 action="accept" policyid=24 policytype="policy" poluuid="12c55036-3d5b-51ee-9360-c36a034ab600" policyname="INTERNET_VDOM" service="HTTPS" trandisp="noop" duration=500 sentbyte=1517 rcvdbyte=1172 sentpkt=8 rcvdpkt=7 appcat="unscanned" sentdelta=1517 rcvddelta=1172
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347481317830 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=xx.xx.xx.xx srcport=18191 srcintf="3050" srcintfrole="lan" dstip=xx.xx.xx.xx dstport=443 dstintf="wan_2" dstintfrole="undefined" srccountry="Peru" dstcountry="Peru" sessionid=1290224387 proto=6 action="timeout" policyid=21 policytype="policy" poluuid="ab285ae0-3d5a-51ee-dce1-3f4aec1e32dc" policyname="PUBLICACION_VDOM" service="HTTPS" trandisp="noop" duration=13 sentbyte=180 rcvdbyte=0 sentpkt=3 rcvdpkt=0 appcat="unscanned"
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:27 devname="fw_name_2" devid="fortigate_id_2" eventtime=1695157346792901761 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom3" srcip=xx.xx.xx.xx srcport=47767 srcintf="3006" srcintfrole="lan" dstip=xx.xx.xx.xx dstport=8580 dstintf="wan_2" dstintfrole="undefined" srccountry="United States" dstcountry="Peru" sessionid=3499129086 proto=6 action="timeout" policyid=18 policytype="policy" poluuid="9cba23b2-3dfa-51ee-847f-49862ff000c0" policyname="PUBLICACION_VDOM" service="tcp/8580" trandisp="noop" duration=10 sentbyte=40 rcvdbyte=0 sentpkt=1 rcvdpkt=0 appcat="unscanned" srchwvendor="Cisco" devtype="Router" mastersrcmac="xxxxxxxxxxxxxxx" srcmac="xxxxxxxxxxxxxxx" srcserver=0

And the configuration I added to props.conf is the following:

[host::host_ip]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\w{3}\s+\d{1,2}\s\d{2}\:\d{2}\:\d{2})
TIME_PREFIX = eventtime=
TIME_FORMAT = %b %d %H:%M:%S

The format is similar to the configuration applied to similar sources, so I can't figure out why it isn't working.
I'd appreciate any kind of insight you guys could bring. Thanks in advance!    
Sometimes the fix is right there in the documentation itself: https://docs.splunk.com/Documentation/AddOns/released/AWS/Troubleshooting I fixed the issue by updating the splunk-launch.conf file and adding the custom management port. The latest version of the AWS add-on doesn't work with a custom management port; it only works on 8089.
Hi Rich, Brilliant! Thank you, that worked.
This worked for me on version 8.x recently. Thanks.
The lack of props for the sourcetype is not a good thing.  It means Splunk is guessing about how to interpret the data and may be guessing wrong.  Perhaps Splunk Cloud makes different assumptions about the data than Splunk Enterprise does. Create an app with good props.conf settings for the sourcetype and install the app in both environments.  That should fix it.
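As a rough sketch, a sourcetype stanza usually pins down at least line breaking and timestamping. Every value below is a placeholder to be matched to the actual data, and `my_sourcetype` is a hypothetical name:

```
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000
```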
I finally solved the problem by replacing the semicolon  with its URL encoding (%3b) inside the curl command: curl ... -d search="search ... | makemv delim=\"%3b\" hashes | ..." -d output_mode=csv  
To include the time component, we don't need to extract anything. Just treat StartDateTime as a floating-point number (it's a Modified Julian Date; MJD 40587 corresponds to the Unix epoch, 1970-01-01).

| makeresults
| eval StartDateTime="59025.5249306"
| eval time=(StartDateTime-40587) * 86400
| eval humanTime=strftime(time, "%c")
I am trying to add a new input in inputs.conf, which is a network shared folder, to forward some logs from a device that has no forward-logs option. When viewing splunkd.log, I see that the username and password are wrong:

09-19-2023 21:59:46.953 +0300 WARN FilesystemChangeWatcher [10812 MainTailingThread] - error getting attributes of path "\\192.168.1.142\df\InvalidPasswordAttempts.log": The user name or password is incorrect.

However, I can access the shared folder with a browser on the UF host with no problems. By the way, I am using my Microsoft account to log in to the Windows 11 machine where the UF resides. Any suggestions? Thanks.
Working with TAC, we figured out a solution for the Loading... issue. There is an issue with the browser properly displaying the toolbar; this is mitigated by adjusting the web.conf file. There are a couple of places where I had to edit the file:

$SPLUNK_HOME\etc\system\default\web.conf
$SPLUNK_HOME\etc\system\local\web.conf
$SPLUNK_HOME\var\run\splunk\merged\web.conf

You want to find the "minify_js" setting and set it to false (by default it is set to true):

minify_js = false

Restart the Splunk service, flush your browser cache, and you should be good to go! (If it does not work, there might be another web.conf file with minify_js still set to true. Search through and adjust as needed.)
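For reference, in web.conf this setting lives under the [settings] stanza, so a minimal local override (placed in $SPLUNK_HOME\etc\system\local\web.conf, which survives upgrades better than editing the default file) would look like:

```
[settings]
minify_js = false
```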
The only guidance I received was to downgrade; however, since Splunk doesn't support "downgrades", they provided no guidance on how. I ended up uninstalling and reinstalling 9.1 and confirmed the issue was resolved after a clean install. Then I proceeded to restore items/apps/config files one by one.
Hi @Jeffrey.Leedy, Thanks for sharing this info. Seems like it might be a bug, so I would recommend contacting Support and letting them know.  How do I submit a Support ticket? An FAQ  If you hear back from Support, please report back on this thread.
Hi @Everton.Arakaki, Did you ever reach out to Support and get a reply?
Index-time extractions don't have an equivalent to the max_match option of the rex command. Consider extracting all users together at index time and then splitting them at search time.

[get-users]
REGEX = \d:(.+)
FORMAT = users::$1
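The search-time half could then split the combined field. This is only a sketch; the `\d:` delimiter is an assumption carried over from the index-time regex above, and `my_index` is a placeholder, so adjust both to the real data:

```
index=my_index
| rex field=users max_match=0 "\d:(?<user>\w+)"
| mvexpand user
```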
We just upgraded today to v9.1.1 and are experiencing this "Loading" issue as well. It's too bad that I didn't see this post, we would have delayed our upgrade in favour of another version. Has anyone received guidance on the fix for this, or is everyone reverting back to a previous version?
Hi @Swathi.Srinivasan, I found this documentation. Please check it out and see if it helps. https://docs.appdynamics.com/appd/23.x/latest/en/database-visibility/monitor-databases-and-database-servers/monitor-database-performance/database-dashboard
Hi @Adiaobong.Odungide, I've reached out to the Docs team looking for some clarification. I will report back when I have more info.