All Posts

I am experiencing the exact same issue in our deployment after upgrading from 9.0.5 to 9.1.0.2.  Most of the visualisations fail to load, and "Indexing Rate" and "Concurrent Searches" only show "N/A".  The new "Splunk Assist" banner at the top of the Overview page also fails to load. 
Hi, I've just come across this post and your trellised charts are very similar to something I am trying to achieve. I am new to Splunk and so may be missing something, but I can't replicate your layout. Is the XML you posted necessary to get the layout you have? This is my search:

index=felix_emea sourcetype="Felixapps:prod:log" Action="Resp_VPMG"
| dedup EventIndex
| rex field=Message "^<b>(?<Region>.+)<\/b>"
| rex "Response Codes:\s(?<responseCode>\d{1,3})"
| rex field=Message ":\s(?<errCount>\d{1,4})$"
| bin _time span=1h
| stats count by _time, Region, responseCode
| eval {responseCode}=count
| fields - responseCode, region, count

This is a sample of my data:

Time | responseCode | Region | errCount
21/11/2022 09:46:07 | 912 | VPMG - Wizink PRD-E5 | 14
21/11/2022 09:16:31 | 911 | Moneta IBS via VPMG | 8
21/11/2022 03:02:07 | 912 | Moneta IBS via VPMG | 129
21/11/2022 02:46:59 | 911 | Moneta IBS via VPMG | 92
20/11/2022 20:31:38 | 911 | Moneta IBS via VPMG | 16
20/11/2022 19:31:36 | 912 | Moneta IBS via VPMG | 32
20/11/2022 02:26:45 | 911 | Addiko IBS via VPMG | 7

And this is the visualisation I'm trying to achieve (this is done in Power BI). Is this achievable in Splunk? Thanks, Steve
Hi guys, I want to detect when more than 10 different ports of the same host are sniffed and scanned within a 15-minute window, triggered 5 times in a row, and then raise an alarm; if the same time period is triggered for three consecutive days, the alarm should also be raised. The current SPL:

index="xx"
| bin _time span=15m
| stats dc(dest_port) as dc_ports by _time src_ip dest_ip
| where dc_ports > 10
| streamstats count as consecutive_triggers by src_ip dest_ip reset_on_change=true
| where consecutive_triggers>=5

Next, I don't know how to query for the trigger recurring in the same period for three consecutive days.
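For what it's worth, here is one possible way to extend this to the three-consecutive-days condition. This is a sketch only, not tested against real data, and the daily_triggers, first_day, and days_seen field names are illustrative: collapse the qualifying 15-minute triggers to one row per day, then use a 3-row streamstats window to check that the last three daily rows span exactly two days.

```spl
index="xx"
| bin _time span=15m
| stats dc(dest_port) as dc_ports by _time src_ip dest_ip
| where dc_ports > 10
| streamstats count as consecutive_triggers by src_ip dest_ip reset_on_change=true
| where consecutive_triggers >= 5
| bin _time span=1d
| stats count as daily_triggers by _time src_ip dest_ip
| streamstats window=3 global=f min(_time) as first_day count as days_seen by src_ip dest_ip
| where days_seen = 3 AND _time - first_day = 172800
```

Because the stats leaves at most one row per day per src/dest pair, three rows whose earliest and latest day buckets are exactly 172800 seconds (two days) apart must be three consecutive days.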
We are using a query to get results based on the previous month and also on a date range (dd-mm-yy). But our query only returns results for the previous month, not for the date range: it does not accept double quotes ("") around the number, and when I remove the quotes it no longer accepts strings. The query we used, for reference:

| eval lnum=if(match("1690848000","^[@a-zA-Z]+"),"str","num"), enum=if(match("1688169600","[a-zA-Z]"),"str","num")
| eval latest=case(isnum(1690848000),(1690848000-60),"1690848000"="now",now(),"1690848000"="",now(),lnum!="str","1690848000",1=1,relative_time(now(), "1690848000"))
| eval earliest=case(isnum(1688169600),(1688169600-60),"1688169600"="0","0",enum!="str","1688169600",1=1,relative_time(now(), "1688169600"))

isnum() does not work with the quotes, and without the quotes strings do not work. Please suggest a solution; I have tried different ways but nothing is working. I want the same data for the previous-month filter and also for the date-range filters. That is the question.
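For what it's worth, one common pattern for this kind of dual-mode filter is to test convertibility with tonumber() instead of isnum(), since a quoted value is always a string and an unquoted bareword is not valid SPL. A sketch under assumptions (the value arrives as a string, e.g. from a dashboard token; the input and input_num field names are hypothetical):

```spl
| eval input="1690848000"
| eval input_num=tonumber(input)
| eval latest=case(
    isnotnull(input_num), input_num - 60,
    input="now" OR input="", now(),
    1=1, relative_time(now(), input))
```

tonumber() returns null when the string is not numeric, so the first case branch fires only for epoch-style values, and anything else falls through to relative_time().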
Hello, when I enable sslVerifyServerCert in server.conf under [sslConfig], I am seeing the following errors. How does Splunk determine that there is an IP address mismatch? Is it trying to resolve the CN mentioned in the certificate?

09-11-2023 11:40:01.284 +0300 WARN X509Verify [1034989 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:01.285 +0300 WARN X509Verify [1034990 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:01.286 +0300 WARN X509Verify [1034986 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:03.998 +0300 WARN X509Verify [1034777 DistHealthReporter] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:03.998 +0300 WARN X509Verify [1034786 DistributedPeerMonitorThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"
09-11-2023 11:40:04.005 +0300 WARN X509Verify [1034777 DistHealthReporter] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP addrsearche mismatch"

Cheers.
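For what it's worth, this kind of warning typically appears when a peer is addressed by a name or IP that does not match the certificate's CN or subjectAltName. One option (a sketch, assuming the search peers are currently listed by IP address; the URI below is illustrative) is to reference the peers by the DNS name the certificate was issued for, or alternatively to reissue the certificates with the peers' IP addresses as SAN entries:

```
# distsearch.conf (sketch): address the peer by the name in the certificate,
# not by IP, so that hostname validation can succeed
[distributedSearch]
servers = https://searche.test.local:8089
```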
Hi @kamlesh_vaghela, thanks for your response! I have attached the screenshot for your reference. Thanks in advance!
Hi Splunk folks, I am experiencing strange behavior after upgrading from 7.0.2 to 7.1.1. After the change we noticed that multiple Correlation Searches normally accessible from Content Management have somehow been "transformed" into Saved Searches and stopped being scheduled. Many of them, when expanded, show the error "No associated objects or datasets found." Additionally, the correlation searches that remained intact open the correlation search edit menu as expected, while those that "transformed" into Saved Searches take me to the Searches, reports, and alerts menu. In savedsearches.conf we can indeed see that all of the corrupted ones are missing "action.correlationsearch.enabled" and the other fields related to correlation searches. Any ideas what might have happened? I have not found similar issues described on the web.
Noted - done, thanks for the heads-up.
Hi, I am uploading a .csv file to a metrics index; however, Splunk is changing a "." to a "_" when I use the metrics. Any idea why this is happening? When sending data in via HEC this does not happen. Any help would be great - cheers.
Hi, My company has recently gone through a re-branding (name change). I was wondering how I should go about changing the Splunkbase company display name (and everything related to that). I tried contacting support but have been ghosted for the past month. Any way I can do this myself? Thanks!
Hello all, I am still relatively new to Splunk and SPL. To show the maximum uptime per day of four hosts in a bar chart, I wrote the following query:

sourcetype=datacollection VMBT02 OR slaznocaasevm01 OR VMMS01 OR slaznocaasmon01
| rex "Uptime: (?<hours>\d+) hours (?<minutes>\d+) minutes"
| rex "Uptime: (?<minutes>\d+) minutes (?<seconds>\d+) seconds"
| eval hours = coalesce(hours, 0)
| eval minutes = coalesce(minutes, 0)
| eval seconds = coalesce(seconds, 0)
| eval uptime_decimal = if(minutes > 0 AND seconds > 0, minutes/1000 * 10, hours*1 + minutes/100)
| eval formatted_uptime = round(uptime_decimal, 2)
| where extracted_hostname IN ("VMBT02", "slaznocaasevm01", "VMMS01", "slaznocaasmon01")
| stats max(formatted_uptime) as Uptime by extracted_Hostname

I extracted the field "extracted_hostname" via the GUI beforehand. There are no events that fail to match the regex, and the chart is displayed correctly. However, the field extraction does not work from the 1st to about the 10th of a month: instead of the hostnames, other data fragments from the first line of an event are extracted, and I can't see a pattern in this either. Does anyone know what causes the incorrect extraction? Is my query perhaps incorrect? I hope someone can help me solve this problem. Thanks in advance. Many greetings
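As an aside on the arithmetic rather than the extraction problem itself: the minutes/1000 * 10 and minutes/100 terms mix units. A sketch of the tail of the query that converts everything to decimal hours, and also uses one consistent capitalization of the extracted field (the original filters on extracted_hostname but groups by extracted_Hostname, and Splunk field names are case-sensitive), might look like:

```spl
| eval hours=coalesce(tonumber(hours), 0)
| eval minutes=coalesce(tonumber(minutes), 0)
| eval seconds=coalesce(tonumber(seconds), 0)
| eval uptime_hours=round(hours + minutes/60 + seconds/3600, 2)
| where extracted_hostname IN ("VMBT02", "slaznocaasevm01", "VMMS01", "slaznocaasmon01")
| stats max(uptime_hours) as Uptime by extracted_hostname
```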
Hi Vatsal, I lost access to this account, so I haven't been able to reply until now. This is the query you suggested I try:

index=felix_emea sourcetype="Felixapps:prod:log" Action="Resp_VPMG"
| dedup EventIndex
| rex field=Message "^<b>(?<Region>.+)<\/b>"
| rex "Response Codes:\s(?<responseCode>\d{1,3})"
| rex field=Message ":\s(?<errCount>\d{1,4})$"
| bin _time span=1h
| stats count by _time, Region responseCode
| eval {Region}=count
| fields - Region, count

I'm not sure what the visualisation is showing me exactly: I can activate a trellis display by region, but the bars on each graph (when I activate the legend) are labelled with 'responseCode' and the region, and all bars show just under 1,000. Again, the Power BI display I am trying to replicate is a timechart of the count of response codes by region, trellised by responseCode. Here is the sample data for the Power BI report:

Time | Action | responseCode | Region | errCount
21/11/2022 09:46:07 | Resp_VPMG | 912 | VPMG - Wizink PRD-E5 | 14
21/11/2022 09:16:31 | Resp_VPMG | 911 | Moneta IBS via VPMG | 8
21/11/2022 03:02:07 | Resp_VPMG | 911 | Moneta IBS via VPMG | 129
21/11/2022 02:46:59 | Resp_VPMG | 911 | Moneta IBS via VPMG | 92
20/11/2022 20:31:38 | Resp_VPMG | 911 | Moneta IBS via VPMG | 16
20/11/2022 19:31:36 | Resp_VPMG | 911 | Moneta IBS via VPMG | 32
20/11/2022 02:26:45 | Resp_VPMG | 911 | Addiko IBS via VPMG | 7

('Action' is not used.)
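One way to approximate the Power BI layout (a sketch, untested; the trellis visualization splits on a single series field, so the hypothetical series field below combines the code and region into one value):

```spl
index=felix_emea sourcetype="Felixapps:prod:log" Action="Resp_VPMG"
| dedup EventIndex
| rex field=Message "^<b>(?<Region>.+)<\/b>"
| rex "Response Codes:\s(?<responseCode>\d{1,3})"
| rex field=Message ":\s(?<errCount>\d{1,4})$"
| eval series=responseCode . " / " . Region
| timechart span=1h sum(errCount) by series
```

With trellis enabled and split by the series field, this yields one panel per responseCode/Region combination rather than one panel per code with the regions inside it, which may or may not be close enough to the Power BI view.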
Free license or not, the Splunk UI should not display a cryptic error message. I consider this a usability bug because the license type could easily be handled with a clearer message. You can disable Splunk Assist, but that will not solve this problem. To disable the application, go to $SPLUNK_HOME/etc/apps/splunk_assist, create a directory local/, then create a file app.conf that overrides allows_disable in the [install] stanza, like this:

cd $SPLUNK_HOME/etc/apps/splunk_assist
mkdir local
cat <<EOM >local/app.conf
[install]
allows_disable = true
EOM

After restarting Splunk, you can then disable splunk_assist. But the error will persist even when Splunk Assist is disabled. By the way, if you launch splunk_assist directly, you will be able to see the error message

Unable to obtain template "beam:/templates/start.html": ...
TopLevelLookupException(_("Splunk has failed to locate the template for uri '%s'." % uri))
TopLevelLookupException: Splunk has failed to locate the template for uri 'beam:/templates/start.html'.

The problem is that a 9.1 upgrade enables a feature called enable_home_vnext. It is a fine feature (with tons of marketing plugs) except for this bug: Splunk should detect the free license and handle the error without showing a cryptic error message. (It could show, for example, "this feature is unavailable for the installed license", as it does with a number of other features.)

Workaround

The only workaround I have discovered so far is to disable enable_home_vnext in the [feature:page_migration] stanza of web-features.conf. ([feature:page_migration] was introduced in 9.x but lived in web.conf; web-features.conf was introduced in 9.1.1, and a skeleton local/web-features.conf is created by the installer, but the stanza is not in it.)

cd $SPLUNK_HOME/etc/system/local
cat <<EOM >>web-features.conf
[feature:page_migration]
enable_home_vnext = false
EOM

After this, you will see your previous launcher home page instead of the migration page.
(You may need to restart splunkd. I had at least one instance for which I did not perform a restart after editing.) I hope Splunk will not take the legacy launch page away before fixing the bug in the new page.

What changed?

The new index _configtracker added in 9.x makes it much easier to track changes made over time. Because I was upgrading from 9.0.5, I can see the exact changes the 9.1 upgrade made. For example, to see web feature changes:

index="_configtracker" (data.path=*/system/default/web-features.conf)
| spath path=data.changes{}
| fields - data.changes{}.*
| mvexpand data.changes{}
| spath input=data.changes{}
| spath input=data.changes{} path=properties{}
| fields - properties{}.*
| mvexpand properties{}
| spath input=properties{}
| stats latest(*_value) as *_value by data.action name stanza data.path _time
| eval data.path = replace('data.path', ".*/[sS]plunk/etc", "")
| fieldformat _time = strftime(_time, "%F")
| table name *_value stanza data.* _time

On my laptop instance (defaults), this shows the following (for every row, data.action is update, data.path is /system/default/web-features.conf, and _time is 2023-09-08):

name | new_value | old_value | stanza
disable_highcharts_accessibility | false | | feature:highcharts_accessibility
enable_acuif_pages | false | | feature::windows_rce
enable_autoformatted_comments | false | | feature:search_auto_format
enable_dashboards_external_content_restriction | true | | feature:dashboards_csp
enable_dashboards_redirection_restriction | true | | feature:dashboards_csp
enable_events_viz | true | | feature:dashboard_studio
enable_home_vnext | true | | feature:page_migration
enable_inputs_on_canvas | true | | feature:dashboard_studio
enable_jQuery2 | false | true | feature:quarantine_files
enable_search_v2_endpoint | false | | feature:search_v2_endpoint
enable_share_job_control | true | | feature:share_job
enable_show_hide | true | | feature:dashboard_studio
enable_triggered_alerts_vnext | true | | feature:page_migration
enable_unsupported_hotlinked_imports | false | true | feature:quarantine_files
internal.dashboards_trusted_domain.flowmilldocs | docs.flowmill.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.rigorhelp | help.rigor.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkapps | apps.splunk.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkbase | splunkbase.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkbasesplunk | splunkbase.splunk.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkdev | dev.splunk.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkdocs | docs.splunk.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkdownload | www.splunk.com/download | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunklantern | lantern.splunk.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkproducts | www.splunk.com/products | | feature:dashboards_csp
internal.dashboards_trusted_domain.splunkui | splunkui.splunk.com | | feature:dashboards_csp
internal.dashboards_trusted_domain.victoropshelp | help.victorops.com | | feature:dashboards_csp

I see that you just upgraded from 8 to 9, so this new index doesn't contain data from your upgrade. But you can manually ingest $SPLUNK_HOME/var/log/splunkd/migration.log.* as a one-time job, then do a similar search:

source=*/splunk/migration.log.* (data.path=*/system/default/web-features.conf)
| spath path=data.changes{}
| fields - data.changes{}.*
| mvexpand data.changes{}
| spath input=data.changes{}
| spath input=data.changes{} path=properties{}
| fields - properties{}.*
| mvexpand properties{}
| spath input=properties{}
| stats latest(*_value) as *_value by data.action name stanza data.path _time
| eval data.path = replace('data.path', ".*/[sS]plunk/etc", "")
| fieldformat _time = strftime(_time, "%F")
| table name *_value stanza data.* _time

If your system has a lot of history, the output will be numerous; you should probably limit the search to the last archived migration.log. (The current one was created after your upgrade, so the historic data is not in it.) Hope this helps
Hi @Cranie, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi all, we are trying to use the CyberArk Identity functionality in the DB Connect add-on. We created the safe in CyberArk and the firewall is open. When we try to create a new CyberArk identity from the UI (DB Connect -> Configuration -> Identities -> New CyberArk) we get the error message "com.splunk.dbx.server.cyberark.exceptions.CredentialsRequestException: Error has occurred while getting password from CyberArk."

2023-09-11 18:55:24.739 +1000 [dw-28-POST /api/identities] ERROR c.s.dbx.server.cyberark.runner.CyberArkAccessImpl - Unsuccessful response from CyberArk with error

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>403 - Forbidden: Access is denied.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF;background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>403 - Forbidden: Access is denied.</h2>
<h3>You do not have permission to view this directory or page using the credentials that you supplied.</h3>
</fieldset></div>
</div>
</body>
</html>

When we try via a curl command, we are able to fetch the password from CyberArk without any error. Can you please share how to add the certificate details to the CyberArk identity? The Splunk documentation just says to enter the public certificate text.
Hello Splunkers!! As per the attached screenshot, I want to hide the values from Sep 2022 to July 2023, because those periods have null values. I want to show the graph only where there are values.
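If the chart comes from a timechart, one sketch (assuming a single count series; timechart fills the whole time range with empty buckets) is to filter out the empty buckets after the fact, at the cost of a discontinuous x-axis:

```spl
| timechart span=1mon count
| where count > 0
```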
I could not get the solution that @yuanliu gave working in the way I needed, but I have now managed to get this to work. Many, many thanks.
Yeah, I guess I was more interested in pushing up the changes we have already made, and, before we start on our next lot of changes, seeing if we could merge them in. If that is the case we might just keep our own branch. Bit of a shame really; opening it up can produce some pretty good changes (and no waiting on Splunk to implement them).
The error message appeared in the Messages tab with a yellow warning sign, and then it just went away as if I had done nothing. Thanks for your answer; I'll try yours if I see this again. Thank you.
Hi, as you can see from https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Upgrade there are some things which you must/should check before you can do an update. r. Ismo