All Topics



We have a Splunk environment using indexer clustering (9.0.0), with one instance acting as the deployment server (DS), cluster manager (CM), and license manager (LM). We are planning to remove the LM role from this instance and move it to a search peer. Is that possible?
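For context, moving the LM generally comes down to repointing every instance's server.conf at wherever the license manager ends up (a sketch; the host name is a placeholder, and on 9.x the setting is named manager_uri, with master_uri as the legacy spelling):

```
[license]
manager_uri = https://new-lm-host:8089
```

Each instance needs a restart after the change for it to take effect.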
I have two hosts and I need to compare the field values between them. The field names are the same on both hosts.
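A common pattern for this (the index, host, and field names below are placeholders): group by the field value and count how many hosts carry it, so rows where the hosts disagree stand out:

```
index=my_index host=host1 OR host=host2
| stats dc(host) as host_count, values(host) as hosts by some_field
| where host_count < 2
```

Alternatively, `| stats values(some_field) as some_field by host` shows the two hosts' value sets side by side for a manual comparison.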
Hi, I need to add a role restriction search filter on a field that is only available in one index. My problem is that I am not sure of the proper way to force this restriction on only that index. If I add a restriction like this:

    "field_name"="field_value"

it works fine for the index containing the value, but the other indexes return nothing. If I add a restriction like this:

    ((NOT "field_name"=* ) OR ( "field_name"="field_value"))

the result seems wrong. Do you have an idea of the correct filter to restrict this field? Thanks, Regards, David
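One pattern worth trying (the index name is a placeholder, and you should verify that your role's srchFilter accepts index terms in your version): apply the field condition only inside the one index that carries the field, and pass every other index through untouched:

```
(index=restricted_idx "field_name"="field_value") OR (NOT index=restricted_idx)
```

This avoids the `NOT "field_name"=*` clause, which also matches events in the restricted index where the field simply failed to extract.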
How do I access the Splunk service when using an Ubuntu OS? Yes, I already enabled boot-start when I first installed it. Please help!
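Assuming a default install under /opt/splunk and systemd-managed boot-start (both assumptions; the unit name is typically Splunkd when boot-start was enabled with -systemd-managed 1), the usual checks look like this:

```
sudo systemctl status Splunkd
sudo systemctl start Splunkd

# or drive it through the Splunk CLI instead
sudo /opt/splunk/bin/splunk status
sudo /opt/splunk/bin/splunk start
```

Once the service is running, Splunk Web is reachable at http://<server-ip>:8000 by default.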
Architecture: distributed. The deployment server pushes changes to the forwarders (1 heavy forwarder, 2 universal forwarders); the indexers are clustered, and there is a cluster master. I made a change to an inputs.conf stanza to edit the IP address and hostname for one of the log sources. I tried to push that change to the forwarders by restarting the server class in the deployment server GUI. However, I am unable to see those logs come in from that IP and source name. Is there a step that I am missing?
Hi all, I'm currently using the Modular REST API to pull data from a REST API that lets me specify the earliest time for events through an argument in the URL. I'm using the dynamic token functionality to put a Unix timestamp into the URL, and that all works. My Python code in tokens.py just gets the current Unix time and subtracts 80 seconds. My interval is set to 60 seconds, so in theory I shouldn't lose any data from the API. However, the REST API add-on seems to always issue the same request to the API. If I restart Splunk, the API call uses the correct time once, but then it keeps reusing that same time, even though the Python code should be producing a new value each run.

Here's the Python code:

    def eightySecondsAgo():
        unixEpochTimeNow = time.time()
        timeEightySecondsAgo = int(unixEpochTimeNow) - 80
        return str(timeEightySecondsAgo)

And my inputs.conf:

    [rest://Intercom_admin_events]
    activation_key = <redacted>
    endpoint = https://api.intercom.io/admins/activity_logs?created_at_after=$eightySecondsAgo$
    http_header_propertys = authorization=Bearer <redacted>,accept=application/json,content-type=application/json
    http_method = GET
    auth_type = none
    response_type = json
    streaming_request = 0
    verify = 0
    sourcetype = intercom.admin.events
    polling_interval = 60

It's like the dynamic token response is being cached or something. Really strange. Any ideas?
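Taken out of the add-on, the token function itself behaves correctly; a minimal standalone sketch (renamed to snake_case, but otherwise the same logic) shows that each call returns a fresh value, which supports the theory that the caching happens inside the add-on rather than in tokens.py:

```python
import time

def eighty_seconds_ago():
    """Return the Unix timestamp from 80 seconds ago, as a string."""
    return str(int(time.time()) - 80)

# Two calls one second apart yield different (fresh) values, so a
# URL that keeps reusing an old timestamp went stale elsewhere.
first = eighty_seconds_ago()
time.sleep(1)
second = eighty_seconds_ago()
print(first, second)
```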
Hello, I am currently testing Splunk for our Cisco backbone network and I would like to filter for two scenarios:

1. When a power outage occurs, several power supply failures occur at the same time.
2. When there is a fiber cut, multiple pseudowires go down at the same time.

How can I filter these sensibly so that such a failure is detected immediately? Thanks a lot
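A typical correlation shape for the first scenario (the index, sourcetype, search terms, and the threshold of 3 devices are all placeholders for whatever your Cisco events actually contain) buckets events into short windows and alerts when many devices report the same symptom at once:

```
index=network sourcetype=cisco:syslog "Power supply" "failed"
| bucket _time span=1m
| stats dc(host) as affected_devices by _time
| where affected_devices >= 3
```

The same shape covers the pseudowire case by swapping in the pseudowire-down search terms and tuning the threshold.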
Hi, we're looking into building use cases for HashiCorp Vault. We've had a brief look at their app, mentioned here: https://www.hashicorp.com/blog/splunk-app-for-monitoring-hashicorp-vault The app seems to offer quite some insight into operational data from Vault. We're interested in use cases that could show potential abuse of Vault. Does anyone have any experience or hints, so we don't have to start from scratch? Thanks Chris
Hi, I have about 100 rules and I want to count the number of logs related to each rule. When I used "stats count", it counted the rules that have one or more logs, but it didn't show the rules with zero hits. I tried importing a CSV file that contains all the rules and removing the rows for rules with one or more hits. I also tried the suggestion here, with no luck: Solved: Using Splunk to Find Unused Firewall Policies - Splunk Community. Any suggestions? Thanks
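One sketch of the usual approach, assuming a lookup file all_rules.csv with a column named rule that matches the field in your events: append the full rule list with a count of zero, then sum, so rules with no events survive with count 0:

```
index=firewall
| stats count by rule
| append [| inputlookup all_rules.csv | eval count=0 | fields rule count]
| stats sum(count) as count by rule
```

The index, lookup name, and field name are placeholders; the key idea is that the appended zero rows only matter for rules that produced no events.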
I found a link in the Splunk documentation about the object types to be used in local.meta files, but it seems either outdated or incomplete. In existing files in my project, I found the following object types:

    app, authentication, authorize, collections, commands, distsearch,
    indexes, inputs, lookup, macros, manager, passwords, props,
    savedsearches, server, serverclass, transforms, views

But at that link, only these are defined:

    alert_actions, app, commands, eventtypes, html, lookups,
    savedsearches, searchscripts, views, tags, visualizations

Does anyone know where I can find an exhaustive definition of the object types? Thanks in advance
Hi guys, which endpoint should I use to get the Splunk version, other than /server/info? I don't want to use /server/info because it can be hit without authentication; I want an endpoint that is authentication-protected. It should also be compatible with both on-prem and Cloud. Thanks, Bhargav
Hi, I'm using the latest Node.js agent, installed by the Cluster Agent in AKS. I run a Next.js application with the following script:

    "start:production": "npm-run-all --serial bootstrap next:build next:start"

It looks like AppDynamics tries to attach to the first command in the script, "bootstrap", and this seems to block the application from running the next one. It works fine with one command in the startup script, but with two, three, etc. it gets stuck on the first one. Any idea how to get around this problem? ^ Edited by @Ryan.Paredez for some clarity
Hi all, can somebody please give me a hand with this? I would like to extract the timestamp from an event like this:

    Info1: ASDF
    Info2: QWE
    Info3: YXC
    Time: MON JAN 01 00:00:00 2022

Here is what I am using in props.conf. According to regex101, my TIME_PREFIX should be good, but it doesn't work (Splunk uses the current time in the _time field). The fact that the weekday and month are capitalized should not be a problem.

    TIME_FORMAT = %a %b %e %H:%M:%S %Y
    TIME_PREFIX = ^.*\n^.*\n^.*\n^Time:\s+
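A common gotcha here (assuming the four lines really do arrive as a single event): TIME_PREFIX is one regex applied to the raw event, and the repeated ^ anchors don't match after the embedded newlines unless multiline mode is enabled. An anchor-free prefix is usually enough; a sketch, with the lookahead value as a tunable assumption:

```
TIME_PREFIX = Time:\s+
TIME_FORMAT = %a %b %e %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 30
```

MAX_TIMESTAMP_LOOKAHEAD counts characters from the end of the TIME_PREFIX match, so it only needs to cover the timestamp itself.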
I was wondering if it is possible to set up an alert along these lines: if there is an "errorcode=800" spike above threshold X for X minutes, trip the alert. In other words, a prolonged spike that doesn't go away, or that climbs in volume/frequency. Thought I'd ask if this type of alert is even possible?
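Yes, this shape of alert is possible with a scheduled search; a rough sketch (the index name and the threshold of 100 events per minute are placeholders to adjust):

```
index=my_index errorcode=800
| bucket _time span=1m
| stats count by _time
| where count > 100
```

Schedule it, for example, every 5 minutes over the last 5 minutes and trigger when the number of results is 5, i.e. every minute in the window exceeded the threshold.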
Hi all, I'm new to Dashboard Studio and I'm running into an issue. I have two dates in my dataset: a created_date and a closed_date. I want to count all created_date values that lie in the range specified by the time picker, and do the same for closed_date. Splunk automatically creates the tokens $global_time.earliest$ and $global_time.latest$. I tried to compare using these tokens, but this doesn't work:

    | eval earliest = $global_time.earliest$
    | eval latest = $global_time.latest$
    | eval epochcreated_date = strptime(created_date, "%Y-%m-%dT%H:%M:%S.%3N%z")
    | where epochcreated_date >= earliest AND epochcreated_date <= latest

I hope someone here can point me in the right direction. Thanks in advance.
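One likely cause: time-picker tokens often hold relative time strings such as -24h@h or now, which a numeric where comparison can't evaluate. The addinfo command exposes the search's actual time window as epoch fields instead; a sketch built on the same eval:

```
| eval epochcreated_date = strptime(created_date, "%Y-%m-%dT%H:%M:%S.%3N%z")
| addinfo
| where epochcreated_date >= info_min_time AND epochcreated_date <= info_max_time
```

info_min_time and info_max_time are the epoch bounds of the search window, so no token substitution is needed inside the query.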
Good day, team. Are there any Splunk walkthrough exercises with some data I can bend and manipulate as I learn these concepts and commands? I am a beginner just going through the basics, and it would be good to test out commands as I read. Thanks
Hi all, I have created the alert below to capture error logs:

    index=abc ns=blazegateway ERROR
    | rex field=_raw "(?<!LogLevel=)ERROR(?<Error_Message>.*)"
    | eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S.%3N")
    | cluster showcount=t t=0.4
    | table app_name, Error_Message, cluster_count, _time, env, pod_name, ns
    | dedup Error_Message
    | rename app_name as APP_NAME, _time as Time, env as Environment, pod_name as Pod_Name, cluster_count as Count

I am matching on the keyword ERROR, but I don't want INTERNAL_SERVER_ERROR to be captured. Since I fetch on the ERROR keyword, it is currently capturing INTERNAL_SERVER_ERROR as well, for example:

    routeId:dmr_file_upload,destinationServiceURL:operation:dmrupload, serviceResponseStatus=Failure, routeResponseHttpStatusCode=500 INTERNAL_SERVER_ERROR, serviceResponseTime(ms)=253

Can someone guide me on how to exclude INTERNAL_SERVER_ERROR from my alerts?
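Assuming events that mention INTERNAL_SERVER_ERROR never also carry a separate error you want to keep, the simplest option is to exclude the term in the base search itself:

```
index=abc ns=blazegateway ERROR NOT INTERNAL_SERVER_ERROR
```

If that assumption doesn't hold, an event-level filter later in the pipeline, such as `| regex _raw!="INTERNAL_SERVER_ERROR"`, behaves the same way and can be placed after the rex extraction.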
Hello, we just had our Splunk application pentested, and many issues came up:

1. SQL Injection (11299)
2. Insecure Transport (4722)
3. Credential Management: Sensitive Information Disclosure (10551)
4. Often Misused: Login (10595)
5. Password Management: Weak Password Policy (11496)
6. Often Misused: HTTP Method Override (11534)
7. Cookie Security: Persistent Cookie (4728)
8. Privacy Violation: HTTP GET (10965)
9. HTML5: Overly Permissive Message Posting Policy (11347)
10. HTTP Verb Tampering (11501)
11. Path Manipulation: Special Characters (11699)

Items 3, 4, 5, and 7 I can manage, but I don't know how to fix the others, because I'm only familiar with the Splunk Web interface. I wanted to ask:

1. If I enable SSL and switch my Splunk Web URL from http to https, will that fix the HTTP-related issues, i.e. 6, 8, and 10?
2. How can I fix the other issues? They told us to take screenshot evidence if we can't fix them; where do I look for it?

Thank you in advance.
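On question 1: Splunk Web can be switched to HTTPS with a single setting (a sketch, assuming a default install; place it in $SPLUNK_HOME/etc/system/local/web.conf and restart Splunk). Whether each individual finding then clears depends on how the scanner tests it:

```
[settings]
enableSplunkWebSSL = true
```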
I want to search for files by a range of sizes assigned in the input, but I'm not sure how. Example: I pick 50M in the choices because I want to search for files that are 50M to 199M in size.

Input source:

    <input type="dropdown" token="size_tk">
      <label>File Size:</label>
      <choice value="*">ALL</choice>
      <choice value="50M">50M</choice>
      <choice value="200M">200M</choice>
      <choice value="500M">500M</choice>
      <choice value="1G">1G</choice>
      <choice value="2G">2G</choice>
      <search>
        <query>index=tech_filesystem | makemv delim="," filesize | stats count by filesize</query>
        <earliest>rt-30s</earliest>
        <latest>rt</latest>
      </search>
    </input>
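One way to turn a single selected choice into a range (a sketch; it assumes filesize holds values like 50M or 1G, and it uses two hypothetical tokens, size_lower_tk and size_upper_tk, holding byte values set from the dropdown via <change>/<condition> blocks):

```
index=tech_filesystem
| eval num  = tonumber(replace(filesize, "[A-Za-z]+$", ""))
| eval unit = replace(filesize, "^[\d.]+", "")
| eval bytes = case(unit=="M", num*1024*1024, unit=="G", num*1024*1024*1024, true(), num)
| where bytes >= $size_lower_tk$ AND bytes < $size_upper_tk$
```

For example, the 50M choice would set size_lower_tk=52428800 and size_upper_tk=209715200 (200M), giving the 50M-199M band.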
Hello, I was trying to set up alerting via email and it wouldn't work. The alert definitely gets triggered, because the other alert action (Add to Triggered Alerts) works, but the email one doesn't. In "var\log\splunk\python.log" I found that every trigger produces this error:

    2022-07-27 08:30:04,382 +0200 ERROR sendemail:1610 - [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/alerts/alert_actions/email?output_mode=json
    Traceback (most recent call last):
      File "C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 1603, in <module>
        results = sendEmail(results, settings, keywords, argvals)
      File "C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 193, in sendEmail
        responseHeaders, responseBody = simpleRequest(uri, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
      File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\rest\__init__.py", line 583, in simpleRequest
        raise splunk.ResourceNotFound(uri)
    splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/alerts/alert_actions/email?output_mode=json

I tried "| sendemail ..." and it generates the same error there. What is supposed to be at that endpoint, and how do I fix it?
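As a first diagnostic step (a sketch; the credentials are placeholders, and the URL is taken verbatim from the traceback), you can query the failing endpoint by hand to see whether the 404 reproduces outside the alert pipeline:

```
curl -k -u admin:changeme "https://127.0.0.1:8089/servicesNS/admin/search/alerts/alert_actions/email?output_mode=json"
```

If this also returns 404, the email alert-action configuration is not visible in that user/app namespace, which points at a missing or broken app-level configuration rather than at the alert itself.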