All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, @DalJeanis
I am trying to build the Splunk search below to find all the errors that are causing JVM instability. In pseudocode:

for-each host : hosts (list of hosts)
    for-each jvmerrorevent(event_time, early15minofevent) : jvmerrorevents
        (search1 will result in a table of event_time and event_time - 15 minutes as early15minofevent)
        result += list of errors (search2 = search1 + select the errors that occurred between early15minofevent and event_time)
return result

The query below results in an error. Please suggest if there is a better way to achieve this. Thanks in advance.

index="123apigee"  sourcetype="msg_system_log" (host="123") "ERROR JVM OUT OF MEMORY ERROR"
| eval customtime= strftime(_time, "%Y-%m-%d %I:%M:%S.%3Q")
| eval 15MinEarlyofEvent= strftime(_time - 900, "%Y-%m-%d %I:%M:%S.%3Q")
| table 15MinEarlyofEvent,customtime
| map search="search index=123apigee sourcetype=msg_system_log host=123 ERROR | _time=strftime($customtime$, "%s")"

Regards, Nandini G
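A minimal sketch of one way to do this with map, using the index, sourcetype, and host values from the question: keep the window boundaries as epoch values and pass them to the inner search as earliest/latest, so no strftime/strptime round trip is needed inside the map string. The maxsearches limit is an assumption you may need to raise.

```
index="123apigee" sourcetype="msg_system_log" host="123" "ERROR JVM OUT OF MEMORY ERROR"
| eval event_time=_time, early15minofevent=_time-900
| table early15minofevent event_time
| map maxsearches=50 search="search index=123apigee sourcetype=msg_system_log host=123 ERROR earliest=$early15minofevent$ latest=$event_time$"
```

Each inner search returns the ERROR events in the 15 minutes leading up to one OOM event; a stats or table command can be appended inside the map search string to collapse them into a list per window.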
Hello everyone, I am currently developing a use case in which I have the information below:

Username | User Status | User Code | Time of Event per User Status update
user A   | 0 0 1 1     | xxxxx     | 2021-11-13 22:22:15, 2021-11-13 23:40:09, 2021-11-13 23:45:09, 2021-11-13 23:50:09
user B   | 0 1         | yyyyy     | 2021-11-13 22:40:09, 2021-11-13 22:50:09
user A   | 0 1 1       | ggggg     | 2021-11-13 22:50:09, 2021-11-13 22:55:09, 2021-11-13 22:58:09

I would like to find, for each user, the time difference between the timestamps of the first occurrence of status 0 and the first occurrence of status 1. Based on the table above, for user A I would like to extract the timestamp 2021-11-13 22:22:15 for User Status 0 and the timestamp 2021-11-13 23:45:09 for User Status 1.

My search query so far looks like this:

| my index
| sort UserStatus
| transaction mvlist=true Username UserStatus
| search eventcount >1
| UserStatus =0 and UserStatus=1

Any help will be much appreciated! Thanks
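A minimal sketch, assuming each event carries one Username, one UserStatus value, and a timestamp (the index name is a placeholder): eval-conditioned min() inside stats picks the earliest status-0 and status-1 times per user, without needing transaction.

```
index=my_index
| stats min(eval(if(UserStatus==0, _time, null()))) as first_status0
        min(eval(if(UserStatus==1, _time, null()))) as first_status1
        by Username
| eval diff_seconds = first_status1 - first_status0
| eval diff = tostring(diff_seconds, "duration")
```

If the events are structured differently (for example one event per user with multivalue status and time fields), the fields would need to be expanded with mvexpand first.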
I'd like to play with the BOTS v3 dataset. It requires Enterprise 7.1.7, and the download page for older releases does not show anything before version 8. Does anyone know how to get this older version of Splunk Enterprise? Thanks!
Hi Folks,
I have a bar chart with more than one bar and legend entry for a single day. If I click on a single bar it works fine and shows the next level of dashboard as expected, passing a single day's date/time to the query. When I hover over any legend entry it highlights all the bars related to that legend, which I assume is also normal behavior. The problem I am facing with the legend is that when I click on a legend entry, it drills down with only a single day's date/time and fails to show the next dashboard level. What can I do with the legend? I am not sure if I can disable legends so they are not clickable? I don't want to hide the legend, as we need it in this panel and it is user friendly.
Thanks in advance for any help. I'm trying to find the number of days that a device has not been patched for a Critical severity vulnerability (and is currently not patched). The example below should return 3 days for device Server01. I tried stats and streamstats but was not able to get it to produce these results.

Device   | Message                         | _time
Server01 | Severity Critical Patch Missing | 11/1/2021 2PM
Server01 | Ok (Fully Patched)              | 11/2/2021 2PM
Server01 | Severity Critical Patch Missing | 11/3/2021 2PM
Server01 | Severity Critical Patch Missing | 11/3/2021 6PM
Server01 | Severity Critical Patch Missing | 11/4/2021 2PM
Server01 | Severity Critical Patch Missing | 11/5/2021 6PM (latest event)
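A minimal sketch of one approach, with the index and sourcetype names as placeholders: take the latest "Ok (Fully Patched)" time and the latest event per device, keep only devices whose most recent message is still a missing-patch message, and report the gap in whole days (Server01 in the example comes out to 3).

```
index=patching sourcetype=patch_status
| eval patched_time=if(Message=="Ok (Fully Patched)", _time, null())
| stats max(patched_time) as last_patched max(_time) as latest_event latest(Message) as latest_msg by Device
| where latest_msg!="Ok (Fully Patched)"
| eval days_unpatched=floor((latest_event - last_patched) / 86400)
| table Device days_unpatched
```

If a device never has a fully-patched event, last_patched will be null and would need a fallback such as the start of the search window.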
Hello,
Can I get notified when the search code of a dashboard changes? I am not an admin/owner.
Thanks!
Hello,
How can I get the full name from the log, i.e. Name=Busaram Manjraj? I am trying with this regex:

|rex field=-_raw "(?<Name>[^&]+)\s*\d*"

but it is giving just Name=Busaram, not the full name. The Splunk raw data looks like: Name=Busaram, Manjraj
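A minimal sketch, assuming the raw event contains the literal text Name=Busaram, Manjraj and the full name is always "<surname>, <given name>": anchor the capture on the Name= key and allow one comma inside it.

```
| rex field=_raw "Name=(?<Name>[^,]+,\s*\S+)"
```

If the comma should be dropped from the extracted value, a follow-up `| eval Name=replace(Name, ",", "")` can clean it up.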
Hello,
We are integrating JSON logs via HEC into a Splunk heavy forwarder. I have tried the configurations below. I am applying the props for the source. In transforms there are different regexes, and I would like to route events to different indexes based on the log file and route all the other files that are not required to the null queue. I would not be able to use FORMAT=indexQueue in transforms.conf, as I cannot mention multiple indexes in inputs.conf. This is not working and I am not getting the results I expect. Kindly help. The configs are like below:

props.conf --
[source::*model-app*]
TRANSFORMS-segment=setnull,security_logs,application_logs,provisioning_logs

transforms.conf --
[setnull]
REGEX=class\"\:\"(.*?)\"
DEST_KEY = queue
FORMAT = nullQueue

[security_logs]
REGEX=(class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\")
DEST_KEY=_MetaData:Index
FORMAT=model_sec
WRITE_META=true
LOOKAHEAD=40000

[application_logs]
REGEX=(class\"\:\"(/var/log/application.log|/var/log/local*?.log)\")
DEST_KEY=_MetaData:Index
FORMAT=model_app
WRITE_META=true
LOOKAHEAD=40000

[provisioning_logs]
REGEX=class\"\:\"(/opt/provgw-error_msg.log|/opt/provgw-bulkrequest.log|/opt/provgw/provgw-spml_command.log.*?)\"
DEST_KEY=_MetaData:Index
FORMAT=model_prov
WRITE_META=true
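One thing worth noting: the setnull transform as written matches every event that has a "class" field and sends it to the null queue, while the later transforms only rewrite the index, not the queue, so the wanted events are still dropped. A hedged sketch of the usual pattern (discard everything, then pull the wanted events back into the index queue), assuming FORMAT=indexQueue is in fact usable on the heavy forwarder; the keep_* stanza names are placeholders:

```
# transforms.conf (sketch)
[keep_security]
REGEX = class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\"
DEST_KEY = queue
FORMAT = indexQueue

# props.conf (sketch) - setnull first, then the "keep" transforms, then the index rewrites
[source::*model-app*]
TRANSFORMS-segment = setnull, keep_security, keep_application, keep_provisioning, security_logs, application_logs, provisioning_logs
```

keep_application and keep_provisioning would mirror keep_security with the corresponding regexes from the existing index-rewrite stanzas.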
Hi everyone, I want to track which user used which dashboard at what time. I am able to do this using the information in index=_internal. However, I also want to find out what dashboard filters the users applied when they were looking at the dashboard. We store this filter information in the dashboard URL. Is there maybe somewhere that Splunk collects the URLs of the dashboards visited? Then I can use a regex to scrape out the information I need. Or is there somewhere else I can find this information?
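One place this information may already exist is the web access log on the search head: index=_internal sourcetype=splunk_web_access records the requesting user and the URI, and dashboard form inputs typically show up there as form.* query-string parameters. A minimal sketch, with the app and dashboard names as placeholders:

```
index=_internal sourcetype=splunk_web_access uri_path="*/app/my_app/my_dashboard*"
| rex field=uri max_match=0 "form\.(?<filter_name>[^=]+)=(?<filter_value>[^&\s]+)"
| table _time user uri_path filter_name filter_value
```

Values arrive URL-encoded, so a follow-up eval with urldecode() may be needed to make them readable.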
Another engineer and I were taking a look at `index=corelight sourcetype=corelight_notice signature="Scan::*"`. We noticed that `src` was not properly parsed given `kv_mode=auto`. We've attempted the following four courses of action:

1. performed an EXTRACT on _raw as: "src":"(?<src>[^"]+)",
2. performed a REPORT as: corelight_notice_src with a transform as `"src":"(?<src>[^"]+)",` on _raw
3. performed an EXTRACT on _raw as: \"src\":\"(?<src>[^\"]+)\",
4. performed a REPORT as: corelight_notice_src with a transform as `\"src\":\"(?<src>[^\"]+)\",`

Note that performing `| rex field=_raw "\"src\":\"(?<src>[^\"]+)\","` at search time works fine. We also attempted tests 3 and 4 with `AUTO_KV_JSON = false`, which failed, and with `AUTO_KV_JSON = false` and `KV_MODE = none`, which also failed.

Note that the following works:

```
index=corelight sourcetype=corelight_notice signature="Scan::*" | spath output=src path=src
```

When AUTO_KV_JSON=true, most JSON fields are extracted (except for src). When AUTO_KV_JSON=true and KV_MODE=json, most JSON fields are also extracted (except for src).

Any ideas on what the problem is?

```
{"_path":"notice","_system_name":"zEEK01","_write_ts":"2021-11-12T23:22:24.722517Z","ts":"2021-11-12T23:22:24.722517Z","note":"Scan::Address_Scan","msg":"kk: 192.168.0.1 scanned at least 27 unique hosts on ports 443/tcp, 80/tcp in 42m29s","sub":"local","src":"192.168.0.1","peer_descr":"proxy-01","actions":["Notice::ACTION_LOG"],"suppress_for":1,"severity.level":3,"severity.name":"error"}
```

Thanks, Matt
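Since `| spath output=src path=src` works at search time, one hedged workaround (a sketch, not necessarily the root-cause fix) is a calculated field on the search head that pulls src straight out of the JSON, bypassing the automatic KV extraction that is skipping it:

```
# props.conf on the search head (sketch); sourcetype name taken from the post
[corelight_notice]
EVAL-src = spath(_raw, "src")
```

Calculated fields are applied after automatic extraction and EXTRACT/REPORT, so this should populate src even when AUTO_KV_JSON and KV_MODE leave it empty.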
Would anyone be willing to partner with me on creating a Splunk add-on or an app for IBM Aspera HSTS (High Speed Transfer Server, formerly "Enterprise Server")? I've spent a couple of years writing arcane SPL to extract KVs and create dashboards like this - but am rather clueless about how to make that knowledge useful to others and formalize what I learned into an add-on or an app. I am assuming the first step is to create a CIM mapping and then create an add-on around that CIM - yet like I said, I am mostly clueless about it.

Here is what I have:
lots of logs (a few GBs) from a couple of Windows Aspera servers in my team
a few ad-hoc dashboards full of regex-based queries doing KV extraction (quite a few were done with amazing @to4kawa's help)
IBM's unofficial guide to deciphering their logs
time and energy to test and tune an app and to continuously collaborate on it :)

I'd be very grateful for any help! (If you're willing to help, no prior Aspera HSTS knowledge is needed - only knowledge of how to craft apps and add-ons.) Thanks!

P.S. The potential user base for such an app or add-on does not seem to be huge - yet the software is quite popular and there doesn't seem to be much of an effort to create appropriate CIM mappings or add-ons for it.
I created a custom command to generate events from a REST API.

[cmdb]
filename = cmdb.py
generating = true
chunked = true
supports_multivalues = true

The command runs. The problem is that there are different field sets per host that I'm looping through, but I only get the fields for which all the hosts have an entry.

Example:

Host   | Field a | Field b
foo    | Value   | Value
foobar | Value   |
bar    | Value   | Value
fbar   | Value   |

Field b will not show up in the Splunk results list.

Code sample. I add the patch report only to those hosts which have one.

if len(sorted_patch_report) > 0:
    sorted_patch_report = (sorted_patch_report[0])
    sorted_patch_report_renamed = {"Patching_" + str(key): val for key, val in sorted_patch_report.items()}
    i.update(sorted_patch_report_renamed)
    #self.logger.fatal(i)
    yield dict(i)
else:
    #except IndexError:
    #sorted_patch_report = 'null'
    self.logger.info("No patch report for " + i['fullQualifiedDomainName'])
    #self.logger.fatal(i)
    yield dict(i)

If I print the dict to the logger, I see all the fields. Any idea?
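One common cause is that records streamed with differing key sets only reliably surface the fields present in the records seen first. A hedged sketch of a workaround, assuming the records can be buffered before yielding (the variable and helper names here are illustrative, not from the original command):

```python
# Collect all records first, compute the union of field names,
# then pad every record so each one carries the same field set.
records = []
all_keys = set()
for host in hosts:               # assumed: the hosts you loop over
    rec = build_record(host)     # assumed: builds the per-host dict as in the post
    all_keys.update(rec.keys())
    records.append(rec)

for rec in records:
    for key in all_keys:
        rec.setdefault(key, "")  # missing fields become empty strings
    yield rec
```

Buffering trades memory for consistency; an alternative is to know the full field list up front and pad each record as it is yielded.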
I'm working with a Google Super Admin and I'm trying to get Google DLP logs into Splunk Cloud. There is a HEC set up and the majority of the logs are flowing into Splunk via the HTTP Event Collector. However, the problem I'm running into is that from the Google Admin Console I can see and search the DLP logs, but when I search in Splunk those logs are not there. Google Workspace logs are coming in, and the Super Admin states that he is sending everything on their side into Splunk.
I need to understand how to integrate Oracle NetSuite logs with Splunk. I tried searching but I am unable to find a proper method for this. Please help.
Where can I find more information about Nutanix to Splunk Cloud integration? I know there's an app for Nutanix Flow Central (FSC), a Splunk application that is not available for Splunk Cloud. That was 2 years ago. Does Splunk have a version that works for Splunk Cloud?
Hello!

I'm trying to build out a lookup of services on specific servers that I want to know about when they've stopped. I wanted to use wildcards for servers so I didn't need to type out a lot of servers.

This is some sample data and the base of the search that I've been playing with:

host       | Name     | severity | failuresAllowed
server1234 | service1 | low      | 3
server1*   | service2 | high     | 1
server2*   | service3 | medium   | 2

index=windows source=service earliest=-20m [inputlookup Windows_App_Services.csv | table host Name ]
| stats count(eval(if(State!="Running",1,null()))) as failureCount by host Name
| join host Name type=outer [inputlookup Windows_App_Services.csv]

The first inputlookup pulls in just the server name and service we're looking at so that I can search only those events. Then I count how many of those events have a State other than Running, so I know how many times they weren't running in the 20-minute lookback period. Then I'd like to pull in severity and failuresAllowed so that I can use them to calculate severity in ITSI, but when I try the join it does not work because the host doesn't match what's in the lookup, since it's wildcarded.

I've tried creating a wildcard match_type on that lookup, but that doesn't seem to help me. Anyone have any ideas?

Thanks for your help!
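A hedged sketch of one way to make the wildcards work: define a lookup in transforms.conf with match_type = WILDCARD(host) and use the lookup command instead of join, since join against inputlookup does literal matching while the lookup command honors match_type. The lookup definition name below is a placeholder.

```
# transforms.conf (sketch)
[windows_app_services]
filename = Windows_App_Services.csv
match_type = WILDCARD(host)
```

```
index=windows source=service earliest=-20m [inputlookup Windows_App_Services.csv | table host Name ]
| stats count(eval(if(State!="Running",1,null()))) as failureCount by host Name
| lookup windows_app_services host Name OUTPUT severity failuresAllowed
```

The subsearch filter should still work as written, because wildcarded host values returned by a subsearch are treated as wildcard terms in the outer search.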
I'm currently attempting to set up an environment using https://github.com/splunk/splunk-ansible. When I run the playbook it creates the appropriate user-seed.conf, but /opt/splunk/etc/passwd has no user. Therefore, when it tries to run the commands to turn a server into the cluster manager or a peer, it isn't able to do so. Running Rocky Linux 8.
I work in the IT department at Santander. Where do I start learning Splunk? My interest is in learning how to get data into Splunk, do all the monitoring, and so on. We work with real-time monitoring dashboards, cloud, and AI, and I want to learn how to do all of this monitoring from start to finish, delivering the information in real time to the departments. I am interested in taking the most complete course, perhaps a certification, for this type of role. I have 10 years of SAP experience and 2 years of Ariba. Can this course be taken in Portuguese? What does a junior, mid-level, and senior Splunk analyst earn in practice?
Looking to build a report that would display/identify those hosts that are reporting into Forwarder Management but are not sending logs, so I then know which hosts I need to troubleshoot.
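A hedged sketch of one approach, run on the deployment server: pull the list of clients phoning home from the REST endpoint and compare it against the hosts that have actually indexed data recently. The 24-hour threshold and the assumption that the client hostname matches the indexed host field are placeholders to adjust.

```
| rest /services/deployment/server/clients splunk_server=local
| fields hostname
| rename hostname as host
| join type=left host
    [| tstats latest(_time) as last_event where index=* by host]
| where isnull(last_event) OR last_event < relative_time(now(), "-24h")
| table host last_event
```

Hosts with no last_event at all have never indexed anything in the searched indexes; hosts with an old last_event have gone quiet.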
I'm trying to rename the IPs of our servers to Splunk node names:

host_ip        | host_name
ip-111-11-1-11 | Searchhead
ip-111-11-1-12 | Searchhead
ip-111-11-1-10 | Masternode
ip-111-11-2-11 | Indexer
ip-111-11-2-12 | Indexer
ip-111-11-2-10 | Deploymentserver

How do I get it to number the duplicates, like this?

host_ip        | host_name
ip-111-11-1-11 | Searchhead1
ip-111-11-1-12 | Searchhead2
ip-111-11-1-10 | Masternode
ip-111-11-2-11 | Indexer1
ip-111-11-2-12 | Indexer2
ip-111-11-2-10 | Deploymentserver

Thanks in advance!
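A minimal sketch, assuming the mapping lives in a lookup file (the file name is a placeholder): streamstats assigns a running count per host_name and eventstats tells us which names occur more than once, so only the duplicated names get a numeric suffix.

```
| inputlookup splunk_nodes.csv
| streamstats count as seq by host_name
| eventstats count as total by host_name
| eval host_name=if(total > 1, host_name.seq, host_name)
| fields - seq total
```

The same streamstats/eventstats pair works on search results instead of a lookup if the host_ip/host_name pairs come from events.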