All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello All, does anyone know how I can search for an event that is supposed to occur within 24 hours but has not? Example: an invite is sent; if the invite is not marked received within 24 hours, it is a failure. So, let's say an invite was sent on 11/14/21 and received on 11/16/21: that is a failure. The start time would not be now() or the relative_time function, because the start time would be the time the invite was sent. Any help is greatly appreciated.
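One way to approach this (a sketch; the index name and the field names `invite_id` and `action` are assumptions about your data) is to pair each invite's sent and received events and flag the ones with no receipt, or a receipt more than 24 hours (86400 seconds) after sending:

```
index=invites (action="sent" OR action="received")
| eval sent_time=if(action=="sent", _time, null()), recv_time=if(action=="received", _time, null())
| stats min(sent_time) AS sent_time min(recv_time) AS recv_time BY invite_id
| where isnull(recv_time) OR (recv_time - sent_time) > 86400
```

Run it over a window wider than 24 hours so the sent event always falls inside the search's time range.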
Hi, I have some logs coming in XML format from Privileged Access Manager, and I need the fields extracted by default. For example: <level></level>; each <k>=<v> pair, e.g. commandInitiator=USER; and <success>false</success> should become success=false. Sample event: jan 9 09:04:56 1.30.124.24 1 2012-01-09T14:04:56+00:00 yahoota.com pam - metric DETAIL <Metric><type>getAccount</type><level>1</level><description><hashmap><k>commandInitiator</k><v>USER</v><k>commandName</k><v>getAccount</v><k>clientType</k><v>java</v><k>osarch</k><v>amd64</v><k>targetServerAlias</k><v>USR_LOCL_MARKETING_INTELLIGENCE_BATCH</v><k>nodeid</k><v>&lt;?xml version="1.0" encoding="utf-8" ?&gt;&lt;nodeid&gt;&lt;macaddr&gt;&lt;/macaddr&gt;&lt;macaddr&gt:E0&lt;/macaddr&gt;&lt;macaddr&gt;A0:C:E0&lt;/macaddr&gt;&lt;macaddr&gt;:E0&lt;/macaddr&gt;&lt;macaddr&gt;1:E1&lt;/macaddr&gt;&lt;macaddr&gt;1:E3&lt;/macaddr&gt;&lt;macaddr&gt;:E2&lt;/macaddr&gt;&lt;machineid&gt;1E3&lt;/machineid&gt;&lt;applicationtype&gt;cspm&lt;/applicationtype&gt;&lt;/nodeid&gt;</v><k>enablefips</k><v>true</v><k>executionUID</k><v>bibatusr</v><k>version</k><v>4.5.3</v><k>scriptStat</k><v>/opt/ibm/bigintegrate/tools</v><k>scriptName</k><v>/home/infra/bibatusr/run_ds_job.sh</v><k>osversion</k><v>3.10.0-1127.el7.x86_64</v><k>digestLoginDate</k><k>applicationtype</k><v>cspm</v><k>getXMLIndicator</k><v>false</v></hashmap></description><errorCode>405</errorCode><userID>client</userID><success>false</success><originatingIPAddress>10.111.211.50</originatingIPAddress><originatingHostName>yahoota.com</originatingHostName><extensionType></extensionType></Metric> I need these fields extracted automatically when this type of log is integrated into Splunk. What stanzas do we have to write in props.conf and transforms.conf to set this up?
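One hedged sketch (the sourcetype name `pam_metrics` is an assumption; adjust to your real sourcetype or a `[source::...]` stanza) is a search-time REPORT extraction that repeatedly matches the `<k>…</k><v>…</v>` pairs and the flat one-level tags:

```
# props.conf
[pam_metrics]
REPORT-pam_kv = pam_hashmap_kv, pam_simple_tags

# transforms.conf
[pam_hashmap_kv]
# each <k>name</k><v>value</v> pair becomes a field name=value
REGEX = <k>([^<]+)</k><v>([^<]*)</v>
FORMAT = $1::$2
MV_ADD = true

[pam_simple_tags]
# flat tags such as <success>false</success> or <level>1</level>
REGEX = <(level|type|errorCode|userID|success)>([^<]*)</\1>
FORMAT = $1::$2
MV_ADD = true
```

Search-time REPORT transforms whose FORMAT uses `$1::$2` are applied repeatedly across the event, so every pair gets extracted rather than only the first match.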
Good day, I am trying to get alerts via a Teams channel. I followed the instructions in the Splunk docs on how to get a webhook and add it to an alert, but the alert is still triggering via Telegram and not via Teams. What should I look at?
Hi, for /var/log I want to send data to index x if the host is non-prod and the hostname is like abc-nprd*, and to index y if the host is prod and the hostname is like abc-prd*. I don't want to create separate apps for prod and non-prod, so is there a way I can achieve this by deploying the same app to both? Any help appreciated. Thanks.
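Since the hostname itself distinguishes prod from non-prod, one way to do this in a single app (a sketch; `x` and `y` stand for your real index names) is host-based index routing in props/transforms:

```
# props.conf
[source::/var/log/*]
TRANSFORMS-route_by_host = route_nprd, route_prd

# transforms.conf
[route_nprd]
SOURCE_KEY = MetaData:Host
REGEX = ^host::abc-nprd
DEST_KEY = _MetaData:Index
FORMAT = x

[route_prd]
SOURCE_KEY = MetaData:Host
REGEX = ^host::abc-prd
DEST_KEY = _MetaData:Index
FORMAT = y
```

These stanzas have to live where parsing happens (heavy forwarder or indexers), not on a universal forwarder; the same app can then be deployed everywhere.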
I have filenames like these:
-11112021_MOS.csv
-12112021_MOS.csv
-13112021_MOS.csv
I want to create a drop-down based on the date. How can I do that?
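A sketch of a dropdown-populating search (the index name is a placeholder, and this assumes the filename lands in the `source` field):

```
index=your_index source="*_MOS.csv"
| rex field=source "(?<filedate>\d{8})_MOS\.csv$"
| dedup filedate
| table filedate
```

In the dashboard input, use `filedate` as both the label and value field, then reference the selected token in the panel searches.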
Hello folks! This is my first post here, and I hope you can help me with my issue. I have inadvertently selected 4000+ notable events and closed them all with the same note. Is there any script, or anything in the Splunk ES UI that I missed, that can undo my mistake? Your help is much appreciated! Thank you all.
cert.pem (/splunk/auth/splunkweb/cert.pem) is not getting generated when I try to renew the certificate and restart the Splunk service, though server.pem (/splunk/auth/server.pem) does get generated on restart. Any help, please?
Hi, I have a log line like this:

Elapsed time: prediction timer 0.1953 seconds

and I created a rex like this:

rex "Elapsed\stime:\sprediction\stimer\s(?<predictionTime>\d+)\sseconds"

but I am unable to find the value at all. What am I missing here? Any help would be appreciated.
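The likely issue is that `\d+` matches only digits, so it cannot match the decimal value `0.1953`: the match fails at the dot. A sketch of the corrected rex:

```
... | rex "Elapsed\stime:\sprediction\stimer\s(?<predictionTime>\d+\.\d+)\sseconds"
```

`(?<predictionTime>[\d.]+)` would also work if the value is sometimes a whole number.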
Hi Team, @DalJeanis. I am trying to write the Splunk search below to find all the errors that are causing JVM instability. In pseudocode:

for each host in hosts (list of hosts):
    for each jvmerrorevent(event_time, early15minofevent) in jvmerrorevents:
        # search1 returns a table of (event_time, event_time - 15 minutes as early15minofevent)
        result += list of errors  # search2 = search1 + errors that occurred between early15minofevent and event_time
return result

The query below results in an error. Please suggest a better way to achieve this. Thanks in advance.

index="123apigee" sourcetype="msg_system_log" host="123" "ERROR JVM OUT OF MEMORY ERROR"
| eval customtime=strftime(_time, "%Y-%m-%d %I:%M:%S.%3Q")
| eval 15MinEarlyofEvent=strftime(_time - 900, "%Y-%m-%d %I:%M:%S.%3Q")
| table 15MinEarlyofEvent, customtime
| map search="search index=123apigee sourcetype=msg_system_log host=123 ERROR | _time=strftime($customtime$, \"%s\")"

Regards, Nandini G
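One hedged variant (keeping the same index/host values) is to pass epoch `earliest`/`latest` tokens into `map` instead of formatted timestamp strings, so each inner search is bounded to the 15 minutes before the error event:

```
index="123apigee" sourcetype="msg_system_log" host="123" "ERROR JVM OUT OF MEMORY ERROR"
| eval earliest_t=_time-900, latest_t=_time
| fields earliest_t latest_t
| map maxsearches=100 search="search index=123apigee sourcetype=msg_system_log host=123 ERROR earliest=$earliest_t$ latest=$latest_t$"
```

`map` substitutes the `$...$` tokens from each row of the outer search into the inner search, so no strftime conversion is needed inside the mapped search.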
Hello everyone, I am currently developing a use case in which I have the info below:

Username | User Status | User Code | Time of Event per User Status update
user A | 0 0 1 1 | xxxxx | 2021-11-13 22:22:15, 2021-11-13 23:40:09, 2021-11-13 23:45:09, 2021-11-13 23:50:09
user B | 0 1 | yyyyy | 2021-11-13 22:40:09, 2021-11-13 22:50:09
user A | 0 1 1 | ggggg | 2021-11-13 22:50:09, 2021-11-13 22:55:09, 2021-11-13 22:58:09

I would like to find, for each user, the time difference between the timestamps of the first occurrence of status 0 and the first occurrence of status 1. Based on the table above, for user A I would like to extract the timestamp 2021-11-13 22:22:15 for User Status 0 and the timestamp 2021-11-13 23:45:09 for User Status 1.

My search query so far looks like this:

| my index
| sort UserStatus
| transaction mvlist=true Username UserStatus
| search eventcount > 1
| UserStatus=0 AND UserStatus=1

Any help will be much appreciated! Thanks.
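A sketch without `transaction` (the index name is a placeholder; this assumes one event per status update, and groups by User Code as well so the two "user A" sessions stay separate, which is what yields 22:22:15 and 23:45:09 for the first one):

```
index=your_index
| eval t0=if(UserStatus==0, _time, null()), t1=if(UserStatus==1, _time, null())
| stats min(t0) AS first_status0 min(t1) AS first_status1 BY Username UserCode
| eval diff_sec=first_status1 - first_status0
| eval diff=tostring(diff_sec, "duration")
```

`stats min()` over the conditional eval picks the earliest timestamp per status, and `tostring(..., "duration")` renders the difference as HH:MM:SS.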
I'd like to play with the BOTS v3 dataset. It requires Enterprise 7.1.7, and the older-releases download page does not show anything before version 8. Does anyone know how to get this older version of Splunk Enterprise? Thanks!
Hi folks, I have a bar chart with more than one bar and legend entry per day. If I click a single bar, it works fine and shows the next-level dashboard as expected, passing a single day's date/time to the query. When I hover over a legend entry, it highlights all the bars related to that entry, which I assume is also normal behavior. The problem is that when I click a legend entry, it drills down to the next-level dashboard with only a single day's date/time and fails to display it. What can I do about the legend? Can I disable legend entries so they are not clickable? I don't want to hide the legend, as we need it in this panel and it is user friendly.
Thanks in advance for any help. I'm trying to find the number of days that a device has not been patched for a Critical severity vulnerability (currently not patched). The example below should return 3 days for device Server01. I tried stats and streamstats but was not able to get them to produce that result.

Device | Message | _time
Server01 | Severity Critical Patch Missing | 11/1/2021 2PM
Server01 | Ok (Fully Patched) | 11/2/2021 2PM
Server01 | Severity Critical Patch Missing | 11/3/2021 2PM
Server01 | Severity Critical Patch Missing | 11/3/2021 6PM
Server01 | Severity Critical Patch Missing | 11/4/2021 2PM
Server01 | Severity Critical Patch Missing | 11/5/2021 6PM (latest event)
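A sketch (the index name is a placeholder): compute, per device, the time of the last "Ok (Fully Patched)" event and the latest event overall, and take the difference:

```
index=your_index
| stats max(eval(if(Message=="Ok (Fully Patched)", _time, null()))) AS last_ok min(_time) AS first_seen max(_time) AS last_seen BY Device
| eval days_unpatched=round((last_seen - coalesce(last_ok, first_seen)) / 86400)
```

For the sample data, last_ok is 11/2 2PM and last_seen is 11/5 6PM, which rounds to 3 days; `first_seen` is a fallback for devices that were never fully patched within the search window.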
Hello, can I get notified when the search code of a dashboard changes? I am not an admin/owner. Thanks!
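There is no built-in change notification for dashboards, but one hedged approach (the dashboard title is a placeholder, and this assumes your role can read the views REST endpoint) is a scheduled alert over REST that watches the view's `updated` timestamp:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search title="your_dashboard"
| table title updated "eai:acl.owner"
```

Schedule this and alert when `updated` changes between runs, for example by writing the previous value to a lookup and comparing against it.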
Hello, how can I get the full name from the log, i.e. Name=Busaram Manjraj? I am trying with this regex:

|rex field=_raw "(?<Name>[^&]+)\s*\d*"

but it gives just Name=Busaram, not the full name. The Splunk raw data looks like: Name=Busaram, Manjraj
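Given raw data like `Name=Busaram, Manjraj`, a sketch of a rex that captures through the comma and the second word:

```
... | rex field=_raw "Name=(?<Name>\w+,\s+\w+)"
```

If the name is not always exactly two words, anchor the capture on whatever delimiter separates Name from the next field in your events instead.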
Hello, we are integrating JSON logs via HEC into a Splunk heavy forwarder. I have tried the configurations below, applying the props to the source. In transforms, there are different regexes; I want to route events to different indexes based on the log file and route all other files to the null queue. I cannot use FORMAT=indexQueue in transforms.conf because I cannot mention multiple indexes in inputs.conf. This is not working and I am not getting the expected results. Kindly help. The configs are:

props.conf:

[source::*model-app*]
TRANSFORMS-segment = setnull,security_logs,application_logs,provisioning_logs

transforms.conf:

[setnull]
REGEX = class\"\:\"(.*?)\"
DEST_KEY = queue
FORMAT = nullQueue

[security_logs]
REGEX = (class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\")
DEST_KEY = _MetaData:Index
FORMAT = model_sec
WRITE_META = true
LOOKAHEAD = 40000

[application_logs]
REGEX = (class\"\:\"(/var/log/application.log|/var/log/local*?.log)\")
DEST_KEY = _MetaData:Index
FORMAT = model_app
WRITE_META = true
LOOKAHEAD = 40000

[provisioning_logs]
REGEX = class\"\:\"(/opt/provgw-error_msg.log|/opt/provgw-bulkrequest.log|/opt/provgw/provgw-spml_command.log.*?)\"
DEST_KEY = _MetaData:Index
FORMAT = model_prov
WRITE_META = true
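A likely cause is ordering: transforms run left to right, and the `[setnull]` regex matches every event that has a `class` field, sending everything to the nullQueue; the later transforms set the index metadata but never move the event back out of the nullQueue. A common pattern (a sketch, shown for the security category only) is a match-all setnull first, then a pair of transforms per category: one restoring the queue, one setting the index:

```
# props.conf
[source::*model-app*]
TRANSFORMS-segment = setnull, security_keep, security_logs

# transforms.conf
[setnull]
# drop everything by default
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[security_keep]
# pull matching events back into the index queue
REGEX = class\"\:\"(/var/log/cron|/var/log/audit/audit\.log|/var/log/messages|/var/log/secure)\"
DEST_KEY = queue
FORMAT = indexQueue

[security_logs]
REGEX = class\"\:\"(/var/log/cron|/var/log/audit/audit\.log|/var/log/messages|/var/log/secure)\"
DEST_KEY = _MetaData:Index
FORMAT = model_sec
WRITE_META = true
```

Repeat the keep/index pair for the application and provisioning categories, keeping `setnull` first in the TRANSFORMS list.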
Hi everyone, I want to track/find which user used which dashboard at what time. I am able to do this by the information in index=_internal. However, I also want to find out what dashboard filters the users added when they were looking at the dashboard. We store this filter information in the dashboard URL. Is there maybe somewhere that Splunk collects the URLs of the dashboards visited? Then I can use a regex to scrape out the information I need. Or is there somewhere else I can find this information?
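Splunk Web's access log does record request URIs, so one hedged starting point (whether the query string is captured can vary by version and configuration):

```
index=_internal sourcetype=splunk_web_access method=GET uri="*/app/*"
| rex field=uri "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)(?:\?(?<params>\S+))?"
| table _time user app dashboard params
```

If `params` comes back empty, the filters may be sent via POST or held only client-side, in which case they will not appear in the URI at all.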
Another engineer and I were taking a look at `index=corelight sourcetype=corelight_notice signature="Scan::*"`. We noticed that `src` was not properly parsed given `kv_mode=auto`. We've attempted the following four courses of action:

1. performed an EXTRACT on _raw as: "src":"(?<src>[^"]+)",
2. performed a REPORT as corelight_notice_src with a transform of `"src":"(?<src>[^"]+)",` on _raw
3. performed an EXTRACT on _raw as: \"src\":\"(?<src>[^\"]+)\",
4. performed a REPORT as corelight_notice_src with a transform of `\"src\":\"(?<src>[^\"]+)\",`

Note that performing `| rex field=_raw "\"src\":\"(?<src>[^\"]+)\","` at search time works fine. We also attempted tests 3 and 4 with `AUTO_KV_JSON = false`, which failed, and with `AUTO_KV_JSON = false` and `KV_MODE = none`, which also failed. Note that the following works:

```
index=corelight sourcetype=corelight_notice signature="Scan::*"
| spath output=src path=src
```

When AUTO_KV_JSON=true, most JSON fields are extracted (except for src). When AUTO_KV_JSON=true and KV_MODE=json, most JSON fields are extracted (except for src). Any ideas on what the problem is?

```
{"_path":"notice","_system_name":"zEEK01","_write_ts":"2021-11-12T23:22:24.722517Z","ts":"2021-11-12T23:22:24.722517Z","note":"Scan::Address_Scan","msg":"kk: 192.168.0.1 scanned at least 27 unique hosts on ports 443/tcp, 80/tcp in 42m29s","sub":"local","src":"192.168.0.1","peer_descr":"proxy-01","actions":["Notice::ACTION_LOG"],"suppress_for":1,"severity.level":3,"severity.name":"error"}
```

Thanks, Matt
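Since `| spath output=src path=src` works, one workaround (a sketch; apply it in props.conf on the search head) is a calculated field rather than an EXTRACT/REPORT, which sidesteps whatever is interfering with the regex-based extraction:

```
# props.conf (search head)
[corelight_notice]
EVAL-src = spath(_raw, "src")
```

Calculated fields run after the automatic extractions, so this populates `src` even when auto KV skips it. It may also be worth checking whether the Corelight TA defines `src` as an indexed field in fields.conf, since that could mask a search-time extraction.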
Would anyone be willing to partner with me on creating a Splunk add-on or app for IBM Aspera HSTS (High Speed Transfer Server, formerly "Enterprise Server")? I've spent a couple of years writing arcane SPL to extract KVs and create dashboards like this, but I'm rather clueless about how to make that knowledge useful to others and formalize what I've learned into an add-on or an app. I assume the first step is to map the data to the CIM and then create an add-on around that mapping, yet, like I said, I am mostly clueless about it. Here is what I have:
- lots of logs (a few GBs) from a couple of Windows Aspera servers in my team
- a few ad-hoc dashboards full of regex-based queries doing KV extraction (quite a few were done with the amazing @to4kawa's help)
- IBM's unofficial guide to deciphering their logs
- time and energy to test and tune an app and to continuously collaborate on it :)
I'd be very grateful for any help! (If you're willing to help, no prior Aspera HSTS knowledge is needed, only knowledge of how to craft apps and add-ons.) Thanks! P.S. The potential user base for such an app or add-on does not seem to be huge, yet the software is quite popular and there doesn't seem to be much effort to create appropriate CIM mappings or add-ons for it.