All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I found a page in the Splunk documentation about the object types that can be used in local.meta files, but it seems either outdated or incomplete. In existing files for my project, I found the following object types: app, authentication, authorize, collections, commands, distsearch, indexes, inputs, lookup, macros, manager, passwords, props, savedsearches, server, serverclass, transforms, views. But on that page, only these are defined: alert_actions, app, commands, eventtypes, html, lookups, savedsearches, searchscripts, views, tags, visualizations. Does anyone know where I can find an exhaustive definition of the object types? Thanks in advance.
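In case it helps anyone hitting the same question: the object types in *.meta files are essentially the names of configuration files or knowledge-object endpoints, used as stanza prefixes. A minimal illustrative local.meta might look like this (the stanza names and roles below are made up for the example):

```conf
# App-level default permissions
[]
access = read : [ * ], write : [ admin ]
export = system

# Permissions for all props.conf settings in this app
[props]
access = read : [ * ], write : [ admin ]

# Permissions for one specific saved search (note URL-encoded name)
[savedsearches/My%20Alert]
access = read : [ * ], write : [ admin, power ]
```

This is a sketch of the file format only; the full set of valid object types depends on the apps and endpoints installed on your deployment.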
Hi guys, which endpoint should I use to get the Splunk version, other than /server/info? I don't want to use /server/info because it doesn't require authentication to hit; I want an endpoint that is authentication-protected. It should also be compatible with both on-prem and Splunk Cloud. Thanks, Bhargav
Hi, I'm using the latest Node.js agent, installed by the Cluster Agent in AKS. I run a Next.js application with the following startup script: "start:production": "npm-run-all --serial bootstrap next:build next:start". It looks like AppDynamics attaches to the first command in the script ("bootstrap") and blocks the application from running the next one. It works fine with one command in the startup script, but with two, three, etc. it gets stuck on the first one. Any idea how to get around the problem? ^ Edited by @Ryan.Paredez for some clarity
Hi all, can somebody please give me a hand with this? I would like to extract the timestamp from an event like this:

Info1: ASDF
Info2: QWE
Info3: YXC
Time: MON JAN 01 00:00:00 2022

Here is what I am using in props.conf. According to regex101, my TIME_PREFIX should be good, but it doesn't work (Splunk uses the current time for the _time field). The fact that the weekday and month are capitalized should not be a problem.

TIME_FORMAT = %a %b %e %H:%M:%S %Y
TIME_PREFIX = ^.*\n^.*\n^.*\n^Time:\s+
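One thing worth checking here: TIME_PREFIX is a regex matched within the event, and by default Splunk only scans a limited number of characters into the event for a timestamp, so a timestamp on the fourth line can fall outside the scan window. A simpler, unanchored prefix plus an explicit lookahead may help; a sketch (the sourcetype name is a placeholder):

```conf
[my_sourcetype]
# Match the literal label rather than counting newlines
TIME_PREFIX = Time:\s+
TIME_FORMAT = %a %b %e %H:%M:%S %Y
# Characters to read after the TIME_PREFIX match
MAX_TIMESTAMP_LOOKAHEAD = 30
```

This assumes the event really is multi-line at parse time; if line merging settings differ, the anchored ^.*\n pattern may never match.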
I was wondering if it is possible to set up an alert along these lines: if there is an "errorcode=800" spike over X threshold for X minutes, trip the alert. That is, a prolonged spike that doesn't go away or climbs in volume/frequency. Thought I'd ask whether this type of alert is even possible?
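This kind of alert is generally possible with a scheduled search plus a trigger condition. A sketch, where the index name, threshold, and window are placeholders you would tune:

```spl
index=my_index errorcode=800
| bucket _time span=1m
| stats count by _time
| where count > 100
```

Scheduled every 15 minutes over the last 15 minutes with a trigger condition of "number of results > 0", this fires only when at least one minute in the window exceeded the threshold; requiring several rows (e.g. "number of results > 5") approximates a sustained spike rather than a single burst.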
Hi all, I'm new to Dashboard Studio and I'm running into an issue. I have two dates in my dataset: a created_date and a closed_date. I want to count all created_date values that fall within the specified time picker range, and I want to do the same for closed_date. Splunk automatically creates the tokens $global_time.earliest$ and $global_time.latest$. I tried to compare using these tokens, but this doesn't work:

| eval earliest = $global_time.earliest$
| eval latest = $global_time.latest$
| eval epochcreated_date = strptime(created_date, "%Y-%m-%dT%H:%M:%S.%3N%z")
| where epochcreated_date >= earliest AND epochcreated_date <= latest

I hope someone here can point me in the right direction. Thanks in advance.
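One likely cause: time-picker tokens can hold relative strings like "-24h@h" rather than epoch numbers, so a numeric comparison silently fails. The addinfo command adds the search's already-resolved time bounds as info_min_time and info_max_time, which sidesteps token handling entirely; a sketch based on the query above:

```spl
| addinfo
| eval epochcreated_date = strptime(created_date, "%Y-%m-%dT%H:%M:%S.%3N%z")
| where epochcreated_date >= info_min_time AND epochcreated_date <= info_max_time
```

The same pattern works for closed_date with a second strptime/where pair.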
Good day, team. Are there any Splunk walkthrough exercises with some data I can bend and manipulate as I learn these concepts and commands? I am a beginner just going through the basics, and it would be good to test out commands as I read. Thanks
Hi all, I have created the below alert to capture ERROR logs:

index=abc ns=blazegateway ERROR
| rex field=_raw "(?<!LogLevel=)ERROR(?<Error_Message>.*)"
| eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S.%3N")
| cluster showcount=t t=0.4
| table app_name, Error_Message, cluster_count, _time, env, pod_name, ns
| dedup Error_Message
| rename app_name as APP_NAME, _time as Time, env as Environment, pod_name as Pod_Name, cluster_count as Count

I am matching on the keyword ERROR, but I don't want INTERNAL_SERVER_ERROR to be captured. Currently it is captured as well, because I am fetching on the basis of the ERROR keyword:

routeId:dmr_file_upload,destinationServiceURL:operation:dmrupload, serviceResponseStatus=Failure, routeResponseHttpStatusCode=500 INTERNAL_SERVER_ERROR, serviceResponseTime(ms)=253

Can someone guide me on how to exclude INTERNAL_SERVER_ERROR from my alerts?
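One simple approach is to exclude the phrase in the base search, so those events are filtered out before any extraction runs; a sketch using the query above:

```spl
index=abc ns=blazegateway ERROR NOT "INTERNAL_SERVER_ERROR"
| rex field=_raw "(?<!LogLevel=)ERROR(?<Error_Message>.*)"
```

Note this drops the whole event whenever INTERNAL_SERVER_ERROR appears anywhere in it; if such events can also contain other errors you care about, a later `| where NOT match(Error_Message, "INTERNAL_SERVER_ERROR")` filter on the extracted field may be the safer variant.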
Hello, we just had our Splunk application pentested, and many issues popped up. The issues are:

1. SQL Injection (11299)
2. Insecure Transport (4722)
3. Credential Management: Sensitive Information Disclosure (10551)
4. Often Misused: Login (10595)
5. Password Management: Weak Password Policy (11496)
6. Often Misused: HTTP Method Override (11534)
7. Cookie Security: Persistent Cookie (4728)
8. Privacy Violation: HTTP GET (10965)
9. HTML5: Overly Permissive Message Posting Policy (11347)
10. HTTP Verb Tampering (11501)
11. Path Manipulation: Special Characters (11699)

I can manage 3, 4, 5, and 7, but I don't know how to fix the others, because I'm only familiar with the Splunk Web interface. I wanted to ask:

1. If I enable SSL and switch my Splunk Web URL from http to https, will that fix the HTTP-related issues (6, 8, and 10)?
2. How can I fix the other issues? We were told to screenshot evidence if we can't fix them; where do I look for it?

Thank you in advance.
I want to search files by a size range assigned in the input, but I'm not sure how. Example: I pick 50M in the choices because I want to search for files that are 50M to 199M in size.

Input source:

<input type="dropdown" token="size_tk">
  <label>File Size:</label>
  <choice value="*">ALL</choice>
  <choice value="50M">50M</choice>
  <choice value="200M">200M</choice>
  <choice value="500M">500M</choice>
  <choice value="1G">1G</choice>
  <choice value="2G">2G</choice>
  <search>
    <query>index=tech_filesystem | makemv delim="," filesize | stats count by filesize</query>
    <earliest>rt-30s</earliest>
    <latest>rt</latest>
  </search>
</input>
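One common Simple XML pattern for range dropdowns is to put the whole filter expression in the choice value and splice the token into the panel search. A sketch, assuming a numeric size-in-bytes field (here called size_bytes, which is a placeholder for whatever your data actually has):

```xml
<input type="dropdown" token="size_tk">
  <label>File Size:</label>
  <choice value="size_bytes&gt;=0">ALL</choice>
  <choice value="size_bytes&gt;=50000000 AND size_bytes&lt;200000000">50M</choice>
  <choice value="size_bytes&gt;=200000000 AND size_bytes&lt;500000000">200M</choice>
  <choice value="size_bytes&gt;=500000000 AND size_bytes&lt;1000000000">500M</choice>
</input>
```

The panel search then uses the token as a predicate, e.g. `index=tech_filesystem | where $size_tk$`. If filesize is a string like "50M", it would first need converting to a number before this comparison works.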
Hello, I was trying to set up alerting via email and it wouldn't work. The alert definitely gets triggered, because another alert action works (Add to Triggered Alerts), but the email one didn't. In "var\log\splunk\python.log" I found that for every trigger there is an error log, e.g.:

2022-07-27 08:30:04,382 +0200 ERROR sendemail:1610 - [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/alerts/alert_actions/email?output_mode=json
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 1603, in <module>
    results = sendEmail(results, settings, keywords, argvals)
  File "C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 193, in sendEmail
    responseHeaders, responseBody = simpleRequest(uri, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\rest\__init__.py", line 583, in simpleRequest
    raise splunk.ResourceNotFound(uri)
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/alerts/alert_actions/email?output_mode=json

I tried "| sendemail ..." and it generates the same error there. What is supposed to be at that endpoint, and how do I fix it?
Hi, I have a bar chart in trellis mode. The categories (LABEL 1 and LABEL 2) are obtained by aggregation:

| timechart span=1h limit=0 c(eval(type_produit="LABEL 1")) as "LABEL 1" c(eval(type_produit="LABEL 2")) as "LABEL 2" by VALEUR

I have a drilldown on this trellis, and when I click on a part of the bar chart I want to capture the category of my selection (LABEL 1 or LABEL 2), but I can't find the correct token to use. I tried these:

$click.name$ => _time
$click.name2$ => name of the VALEUR field
$click.value$ => 1658833200 (value of _time)
$click.value2$ => value of the VALEUR field
$trellis.name$ => the token is not recognized (it stays $trellis.name$)
$trellis.value$ => the token is not recognized (it stays $trellis.value$)

Any idea? Thanks in advance
Hi, we are planning to upgrade Splunk to 8.2.6.1, but I am unable to find the release notes on the Splunk site. Also, what is the difference between version 8.2.6.1 and the latest Splunk 9.0? Our current Splunk version is 8.2.2 (on-prem). Please provide the exact link where I can find these details.
I have been seeing containers without any artefacts with the Jira 'on poll' action. After comparing tickets that were successfully ingested against those that failed, it appears to be caused by attachment(s) on the Jira tickets. If all attachments are removed from a ticket, the ingestion succeeds. The same failure is seen with manual polling.

/builds/phantom/phantom/alchemy/connectors/spawn3/spawn3.cpp : 978 : JiraConnector: Downloading from: .../screenshot-1.png
2022-07-27T02:46:06.524696 PID:28743 TID:28743 : DEBUG: SPAWN3 : /builds/phantom/phantom/alchemy/connectors/spawn3/spawn3.cpp : 978 : JiraConnector: Could not connect to url to download attachment: Error Code: Error code unavailable. Error Message: HTTPSConnectionPool(host='domain', port=443): Max retries exceeded with url: .../screenshot-1.png (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7ff09d00b190>: Failed to establish a new connection: [Errno -2] Name or service not known'))

The asset runs through an automation broker, polling an on-prem Jira instance. Jira; Publisher: SplunkApp; version: 3.4.0; Python version: 3; Product vendor: Atlassian
Is there a way to integrate Forcepoint One with Splunk?
Hi! I have a stream of syslog data coming from my router via UDP into my workstation, where it is received and parsed by Splunk into fields, to identify (mostly) attempts by outside servers to breach my router's firewall. Unfortunately, the syslog stream consistently omits a space or tab between the MAC address item and the source IP item, like this:

...OUT= MAC=00:00:00:00:00:00:8484:26:2b:9b:ea:bd:src=31.220.1.83...

which causes the SRC IP address field to remain unparsed. If I could instruct Splunk to look for ":src=xx.xxx.xxx.xxx" instead, or cleanse the data stream by converting every ":src=" into ": src=" (note the space), I think my searches would begin interpreting these 'rogue' source IP fields and reveal some interesting attempts to access my network. Does anyone know how to adjust the parsing in Splunk to look for things without a space, or how to cleanse the data stream before it is parsed? Thanks, Rob
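Both approaches described above are possible in props.conf. A sketch, where the sourcetype name is a placeholder for whatever your UDP input uses:

```conf
[my_router_syslog]
# Option 1: index-time rewrite with SEDCMD (applied on the indexer or a
# heavy forwarder; only affects data indexed after the change)
SEDCMD-fix_src = s/:src=/: src=/g

# Option 2: search-time extraction that tolerates the missing space
EXTRACT-src_ip = :src=(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})
```

The search-time EXTRACT has the advantage of also working on already-indexed events, since it changes nothing in the stored raw data.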
Hi all, I see a strange issue on my Splunk. There is an alert scheduled to run every 15 minutes, and I get an undeliverable-mail notice for one user. When I went back to check, the user's email is not configured (maybe it was removed by someone), but I still get the undeliverable email every time the schedule runs. Is there any place where I can check why it is being triggered?
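The scheduler logs in the _internal index usually show which saved search ran and which alert actions (including email) it fired; a sketch for narrowing it down (the saved-search name is a placeholder):

```spl
index=_internal sourcetype=scheduler savedsearch_name="My Alert"
| table _time savedsearch_name user status alert_actions
```

Cross-referencing the recipient may also require checking the alert's own "Send email" action settings and local user-level overrides, since the address can be stored in the alert configuration rather than the user profile.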
Hello everyone, the time modifiers don't seem to work for this search; am I doing something wrong?

| union
  [search query.. earliest=-15m@m latest=now |join type=inner x [query..] |join type=inner x [query..] |dedup x |stats count(x) as total1]
  [search query.. earliest=-15m latest=now |join type=inner x [query..] |join type=inner x [query..] |join type=inner x [query..] |dedup x |stats count(x) as total2]
  [search query.. earliest=-1d-15m@m latest=-1d |join type=inner x [query..] |join type=inner x [query..] |dedup x |stats count(x) as total3]
  [search query.. earliest=-1d-15m@m latest=-1d |join type=inner x [query..] |join type=inner x [query..] |join type=inner x [query..] |dedup x |stats count(x) as total4]
| stats sum(total1) as eval1, sum(total2) as eval2, sum(total3) as eval3, sum(total4) as eval4
| eval y1=eval1-eval2
| eval y2=eval3-eval4
| eval z1=round((y1/eval1)*100, 2)
| eval z2=round((y2/eval3)*100, 2)
| table eval1, eval2, eval3, eval4, y1, y2, z1, z2

The subsearches with the earliest=-1d-15m@m latest=-1d time modifiers do not work and result in 0s in the output table. However, if I change those modifiers to earliest=-15m@m latest=now, it works fine, but it gives me the same result as the first two subsearches. Unsure as to why this is happening.
Hi, I am trying to get the Splunk App for AWS Security Dashboards working. Apparently the default index the app uses is "main". I need to change this. I know I could change the index name by editing the XML, but that would require a lot of changes. I am hoping someone knows where the central place to change it is located. Thank you.
This is probably a stupid question: where can I find the <host> for the HEC URI <protocol>://<host>:<port>/<endpoint>? I am using the server name from server.conf for <host>, but that isn't working. I also tried the IP address of my instance, and that isn't working either. What am I missing? Thanks
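For reference: <host> is simply the hostname or IP address of the Splunk instance (or forwarder) on which the HEC input is enabled and listening, and the default HEC port is 8088, not the 8089 management port. A quick test call sketch, where the host and token are placeholders:

```shell
curl -k "https://splunk.example.com:8088/services/collector/event" \
     -H "Authorization: Splunk <your-hec-token>" \
     -d '{"event": "hello world"}'
```

A successful request returns {"text":"Success","code":0}. Common reasons the host "isn't working" are HEC not being enabled globally, the token being disabled, or a firewall blocking 8088; on Splunk Cloud the host takes the form of a dedicated HEC input endpoint for your stack rather than the search-head name.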