All Topics



I tried to install Splunk using the command below in PowerShell, which installs without any issue, and the service runs in services.msc:

msiexec.exe /i <location of .msi file> AGREETOLICENSE=Yes SPLUNKUSERNAME=admin SPLUNKPASSWORD=PASSWORD1 LAUNCHSPLUNK=1 /qb

But when I run the same command in AWS, it still picks the Windows AMI, not the Splunk AMI. Can anyone advise?
We have date fields in the format 12Jun22, for example, and I need to format them like 12-06-2022, as shown in the table below:

date     expected format
12Jun22  12-06-2022
13Jun22  13-06-2022
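As an aside, the intended conversion can be sketched outside SPL; a minimal Python example, assuming every input follows the day / abbreviated-month / two-digit-year pattern shown in the table (the function name is illustrative):

```python
from datetime import datetime

def reformat_date(value):
    # Parse "12Jun22": %d = day, %b = abbreviated month name, %y = 2-digit year
    parsed = datetime.strptime(value, "%d%b%y")
    # Emit "12-06-2022": day-month-4-digit-year
    return parsed.strftime("%d-%m-%Y")

print(reformat_date("12Jun22"))  # → 12-06-2022
print(reformat_date("13Jun22"))  # → 13-06-2022
```

In Splunk itself, the same two format strings would apply to the strptime() and strftime() eval functions.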
I am trying to compare avg_rt for uWSGI workers over the last 15 minutes and the last 7 days, and then get a percentage out of it. If the difference is more than 50%, I want to trigger an alert. Here is my search:

host="prod-web-02" source="/var/log/uwsgi/app/uwsgi-metrics.log" earliest=-7d latest=now
| stats avg(avg_rt) AS seven_days
| append [ search host="prod-web-02" source="/var/log/uwsgi/app/uwsgi-metrics.log" earliest=-15m latest=now | stats avg(avg_rt) AS fifteen_mins ]
| eval Result = (( fifteen_mins / seven_days ) * 100 )
| where Result > 50

I am unable to get a Result, whatever number I choose; it is not able to execute this part:

| eval Result = (( fifteen_mins / seven_days ) * 100 )
| where Result > 50

I am getting values for fifteen_mins and seven_days:

seven_days          fifteen_mins
320588.43640873017  360114.4
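For reference, the arithmetic this search is aiming for can be checked outside SPL; a small Python sketch using the two averages quoted above (the function name is illustrative):

```python
def percent_of_baseline(recent, baseline):
    # Recent average expressed as a percentage of the 7-day baseline average
    return (recent / baseline) * 100

seven_days = 320588.43640873017
fifteen_mins = 360114.4

result = percent_of_baseline(fifteen_mins, seven_days)
print(round(result, 1))  # → 112.3, which clears the 50% threshold
```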
Hello, my alert produces a table like this:

Time   | ID | FILE_NAME | STATUS
_time1 | 3  | file1.csv | SUCCESS
_time2 | 5  | file2.csv | DATA_ERROR

I want to send an inline table that only contains STATUS=DATA_ERROR, but in the body of the email I still want to use the tokens $result.Time$ and $result.FILE_NAME$ from the STATUS=SUCCESS row. Email body example:

1. File name success detail:
File name: file1.csv
Effective time: _time1

2. Data error detail:
ID | FILE_NAME | STATUS
5  | file2.csv | DATA_ERROR

So basically: hide the STATUS=SUCCESS row, but still use its values in the email tokens. Thank you in advance.
Find large CSV lookups above 400 MB (500 MB limit):

| rest splunk_server=* /servicesNS/-/-/data/transforms/lookups getsize=true f=size f=title f=type f=filename f=eai*
| fields splunk_server filename title type size eai:appName
| where isnotnull(size)
| eval KB = round(size / 1024, 2)
| fields - size
| sort - KB
| search KB>400000

Use this to reduce a CSV lookup (example):

| inputlookup file.csv
| eval time_epoch = strftime(_time,"%s")
| where time_epoch>relative_time(now(),"-100d@d")
| outputlookup file.csv append=false
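The same trimming idea can be sketched generically outside Splunk; a hedged Python equivalent, assuming the lookup CSV carries an epoch-seconds _time column (the function name and column are assumptions):

```python
import csv
import time

def trim_lookup(path, days=100):
    # Rewrite the CSV in place, keeping only rows from the last `days` days
    cutoff = time.time() - days * 86400
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        kept = [row for row in reader if float(row["_time"]) > cutoff]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(kept)
```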
I have the query below and need a scatter-point visualization for it: time on the x axis, build duration on the y axis, and the different job URLs as labels. How do I achieve this?

index="maas-01" sourcetype="jenkins_run:pipeline/describe" source=* "content.stages{}.stage_name"="build:execute"
| rename content.stages{}.stage_duration_sec as duration content.stages{}.stage_name as name content.build_id as id
| eval trimed_source = trim(source, "jenkins_run:/job/")
| eval job_url = substr(trimed_source, 1, len(trimed_source)-2)
| search job_url IN ($_job_url$)
| table id _time name duration job_url
| eval res=mvzip(name, duration)
| eval name=mvindex(name, mvfind(res, "^build:execute.+")), duration=mvindex(duration, mvfind(res, "^build:execute.+"))
| eval time=strptime(strftime(_time, "%Y-%m-%d %H:%M:%S.%N"),"%Y-%m-%d %H:%M:%S.%N")
| eval bEx_Duration_minutes=round(duration/60, 2)
| fields job_url time bEx_Duration_minutes

I just need the time in a human-readable format, not an epoch number. Is it possible to use a scatter plot for the above query with the default _time, or is there another way to do this? Below is the visualisation that is currently generated; I need the same output, but with a readable date and time, or date only.
Hi all, we are upgrading one of our environments from Splunk 8.2.0 to Splunk 9.0. We hit an issue when we tried to upgrade the indexers: the Splunk upgrade process got stuck at this point: I was looking on the internet, but I cannot see anything related to this issue. Can someone help me with this? Many thanks in advance. Best regards.
Hi, we face a challenge. We have created an alert that monitors one of the Windows services (the cloud gateway service); if this service is not running or has stopped, Splunk triggers an alert for that.

I wanted to check whether it is possible that, when Splunk triggers such an alert, Splunk can then resolve it by going to that server, logging in, and restarting the service.

We have identified one solution for this: executing an alert action using a script. May I know where we can set the script (host=CSG196)? Can we deploy the script on the host? Can anyone suggest how to resolve this issue?
Hello All, I have a problem with my search. The following search works:

index=test_index sourcetype=test_sourcetype
| search Modulename IN ("Test_One","Test_Two")

However, this search does not work:

index=test_index sourcetype=test_sourcetype
| eval helper_modulename = replace("Test_One&form.Modulename=Test_Two", "&form.Modulename=", "\",\"")
| eval helper_modulename = "\"" . helper_modulename . "\""
| search Modulename IN (helper_modulename)

The result of helper_modulename is the same string I use in the search that works. Can anyone tell me what I am doing wrong and what needs to be adapted to make it work? Thank you all in advance!
Hello, I need to create a search that displays results based on a specific value. My issue is that the following search does not return any results. In the penultimate line, when I replace user_ip with index_field1="1.2.3.4" it works, and when I remove the last two lines I can see that user_ip does contain "1.2.3.4"... But index_field1=user_ip does not match, and the same goes for index_field2...

index=...
| eval field1="1.2.3.4:100"
| rex field=src_ip_port "(?<user_ip>.+)\:(?<user_port>.+)"
| table user_ip user_port
| search index_field1=user_ip index_field2=user_port
| table index_field1 index_field2 user_ip user_port

Thanks in advance for your feedback.
Has anyone worked on dynamically plotting/highlighting areas in a custom image based on a Splunk SPL condition?
Hi All, I am trying to monitor files and folders on a network path using a basic (outline-only) Python script, shown below. My Splunk environment currently has Python version 2.7. On running this script I get errors like "no module named requests" / "no module named time". I am fairly new to Splunk, so I am not sure how to integrate modules into the environment. A "pip install requests" command is also failing with an error. Could you please help with a workaround for this? Thanks in advance!

import requests  # third-party; unused below, and the source of the "no module named requests" error
import os
import time

# path of the network directory
path = input()
entries = os.listdir(path)

# creation time of the directory itself
creation_time = os.path.getctime(path)
local_time = time.ctime(creation_time)
print("Creation Time:", local_time)

# read lines of text as a list, from each regular file in the directory
# (opening the directory path itself would raise an error)
for name in entries:
    full_path = os.path.join(path, name)
    if os.path.isfile(full_path):
        with open(full_path) as f:
            print(f.readlines())

# number of files or folders in the network directory
print("Number of files/folders in directory:", len(entries))
Hi, I want to deploy a new configuration in an inputs.conf file for a specific server from the deployment server. Can anyone suggest some detailed steps? Thanks in advance.
In a Splunk dashboard I show a table. On this table's "host" column I want to apply CSS conditionally, via a renderer in jQuery. That also worked. The problem is that I want to pass arguments in from Simple XML; right now they are hard-coded in the jQuery. Below is the table for reference. In the jQuery function customeCssLoad() I want to pass args from Simple XML into that function. Please help me.
How can we extract the list of items from the code below?

<s:key name="read">
<s:list>
<s:item>rest_poc</s:item>
<s:item>poc</s:item>

The required output should be the list of items for the key named "read". Example:

rest_poc
poc

<?xml version="1.0" encoding="UTF-8"?>
<!--This is to override browser formatting; see server.conf[httpServer] to disable.
.--> <?xml-stylesheet type="text/xml" href="/static/atom.xsl"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>index=sample_idx | stats count</title> <id>https://hostname.com.au:8089/services/search/jobs/1528783903.136065</id> <updated>2018-06-12T16:14:02.544+10:00</updated> <link href="/services/search/jobs/1528783903.136065" rel="alternate"/> <published>2018-06-12T16:11:43.000+10:00</published> <link href="/services/search/jobs/1528783903.136065/search.log" rel="search.log"/> <link href="/services/search/jobs/1528783903.136065/events" rel="events"/> <link href="/services/search/jobs/1528783903.136065/results" rel="results"/> <link href="/services/search/jobs/1528783903.136065/results_preview" rel="results_preview"/> <link href="/services/search/jobs/1528783903.136065/timeline" rel="timeline"/> <link href="/services/search/jobs/1528783903.136065/summary" rel="summary"/> <link href="/services/search/jobs/1528783903.136065/control" rel="control"/> <author> <name>rest_poc</name> </author> <content type="text/xml"> <s:dict> <s:key name="canSummarize">0</s:key> <s:key name="cursorTime">2038-01-19T14:14:07.000+11:00</s:key> <s:key name="defaultSaveTTL">604800</s:key> <s:key name="defaultTTL">300</s:key> <s:key name="delegate"></s:key> <s:key name="diskUsage">65536</s:key> <s:key name="dispatchState">DONE</s:key> <s:key name="doneProgress">1.00000</s:key> <s:key name="dropCount">0</s:key> <s:key name="earliestTime">1970-01-01T10:00:00.000+10:00</s:key> <s:key name="eventAvailableCount">0</s:key> <s:key name="eventCount">0</s:key> <s:key name="eventFieldCount">0</s:key> <s:key name="eventIsStreaming">1</s:key> <s:key name="eventIsTruncated">1</s:key> <s:key name="eventSearch"></s:key> <s:key name="eventSorting">desc</s:key> <s:key name="isBatchModeSearch">0</s:key> <s:key name="isDone">1</s:key> <s:key name="isEventsPreviewEnabled">0</s:key> <s:key name="isFailed">0</s:key> 
<s:key name="isFinalized">0</s:key> <s:key name="isPaused">0</s:key> <s:key name="isPreviewEnabled">0</s:key> <s:key name="isRealTimeSearch">0</s:key> <s:key name="isRemoteTimeline">0</s:key> <s:key name="isSaved">0</s:key> <s:key name="isSavedSearch">0</s:key> <s:key name="isTimeCursored">0</s:key> <s:key name="isZombie">0</s:key> <s:key name="keywords"></s:key> <s:key name="label"></s:key> <s:key name="normalizedSearch"></s:key> <s:key name="numPreviews">0</s:key> <s:key name="optimizedSearch">index=sample_idx | stats count</s:key> <s:key name="pid">9035</s:key> <s:key name="pid">9035</s:key> <s:key name="priority">5</s:key> <s:key name="provenance"></s:key> <s:key name="remoteSearch"></s:key> <s:key name="reportSearch">index=sample_idx | stats count</s:key> <s:key name="resultCount">5</s:key> <s:key name="resultIsStreaming">0</s:key> <s:key name="resultPreviewCount">5</s:key> <s:key name="runDuration">0.015</s:key> <s:key name="sampleRatio">1</s:key> <s:key name="sampleSeed">0</s:key> <s:key name="scanCount">0</s:key> <s:key name="searchCanBeEventType">0</s:key> <s:key name="searchTotalBucketsCount">0</s:key> <s:key name="searchTotalEliminatedBucketsCount">0</s:key> <s:key name="sid">1528783903.136065</s:key> <s:key name="statusBuckets">0</s:key> <s:key name="ttl">300</s:key> <s:key name="performance"> <s:dict> <s:key name="command.head"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">35</s:key> <s:key name="output_count">5</s:key> </s:dict> </s:key> <s:key name="command.inputlookup"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">172</s:key> </s:dict> </s:key> <s:key name="command.stats"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">35</s:key> </s:dict> </s:key> <s:key 
name="dispatch.check_disk_usage"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.createdSearchResultInfrastructure"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.head"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.inputlookup"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.stats"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.FinalEval"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.matchReportAcceleration"> <s:dict> <s:key name="duration_secs">0.004</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.optimization"> <s:dict> <s:key name="duration_secs">0.006</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.reparse"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.toJson"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.optimize.toSpl"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.writeStatus"> <s:dict> <s:key name="duration_secs">0.007</s:key> <s:key name="invocations">7</s:key> </s:dict> </s:key> <s:key name="startup.configuration"> <s:dict> <s:key 
name="duration_secs">0.089</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="startup.handoff"> <s:dict> <s:key name="duration_secs">0.003</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="fieldMetadataStatic"> <s:dict> <s:key name="Description"> <s:dict> <s:key name="type">unknown</s:key> <s:key name="groupby_rank">0</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="fieldMetadataResults"> <s:dict> <s:key name="Description"> <s:dict> <s:key name="type">unknown</s:key> <s:key name="groupby_rank">0</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="messages"> <s:dict/> </s:key> <s:key name="request"> <s:dict> <s:key name="search">index=sample_idx | stats count</s:key> </s:dict> </s:key> <s:key name="runtime"> <s:dict> <s:key name="auto_cancel">0</s:key> <s:key name="auto_pause">0</s:key> </s:dict> </s:key> <s:key name="eai:acl"> <s:dict> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>rest_poc</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>rest_poc</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="owner">rest_poc</s:key> <s:key name="modifiable">1</s:key> <s:key name="sharing">global</s:key> <s:key name="app">search</s:key> <s:key name="can_write">1</s:key> <s:key name="ttl">300</s:key> </s:dict> </s:key> <s:key name="searchProviders"> <s:list/> </s:key> </s:dict> </content> </entry>   Please help, thanks in advance.
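For what it's worth, the <s:item> values under <s:key name="read"> can be extracted with any namespace-aware XML parser; a minimal sketch using Python's standard xml.etree.ElementTree, with the namespace URI taken from the xmlns:s declaration on the <entry> element above:

```python
import xml.etree.ElementTree as ET

NS = {"s": "http://dev.splunk.com/ns/rest"}

def read_perms(xml_text):
    # Find every <s:key name="read"> and collect its <s:item> children
    root = ET.fromstring(xml_text)
    items = []
    for key in root.iter("{http://dev.splunk.com/ns/rest}key"):
        if key.get("name") == "read":
            items.extend(i.text for i in key.findall(".//s:item", NS))
    return items

sample = """<s:dict xmlns:s="http://dev.splunk.com/ns/rest">
  <s:key name="read"><s:list>
    <s:item>rest_poc</s:item><s:item>poc</s:item>
  </s:list></s:key>
</s:dict>"""

print(read_perms(sample))  # → ['rest_poc', 'poc']
```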
Hi, first time I have ever seen this, but I'm curious if it's just me. I have a search defined as:

<search id="device_base_index">
<query>
index=oi sourcetype=device earliest=-30d@d latest=+2d@d
</query>
</search>

And a table as:

<table>
<title>Data Readiness</title>
<search base="device_base_index">
<query>fields deviceId inventoryStatus configStatus | eval ic=configStatus+"::"+inventoryStatus | makemv delim="::" ic | mvexpand ic | streamstats count by deviceId | eval status=if(count = 1, "config", "inventory") | fields deviceId status ic | chart count over status by ic</query>
</search>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>

The dashboard only shows the results from the base search and doesn't include the results as if they had been passed through the table part of the query. When I click on the magnifying glass, it loads up the full search, so I know the query and base search are attached at some point. The other strange thing is that when I look at the log, it only shows the base search:

Job Details Dashboard OptimizedSearch: | search (earliest=-30d@d index=oi latest=+2d@d sourcetype=device)

But in search.log it does see both parts of the full query:

Expanded index search = (index=oi sourcetype=device _time>=1653314400.000 _time<1656079200.000)
base lispy: [ AND index::oi sourcetype::device ]

and then it sees the other part of the query:

PARSING: postprocess "fields deviceId inventoryStatus configStatus etc...

search.log contains no ERROR messages. If I add the query to the table and don't use the base search, it all runs fine. Any ideas why the base search and table query are not both executed, and only the base search part runs?

cheers -brett
Does anyone know where I can request an annual application security assessment report for a Splunk product? I am looking for the latest Splunk UBA security assessment, such as a DAST, SAST, or pentest report. Thank you.
When I am using Splunk Web to perform a date-range (or date and time range) search, the Date Picker is in the US date format (MM/DD/YYYY) rather than the Rest Of The World format (DD/MM/YYYY), even though I am logged in to Splunk with the en-GB locale/URL path. Everything else works correctly as far as I can tell for en-GB settings, but the date picker for Date Range and Date and Time Range seems stuck in the wrong format. Here's an example of what I am seeing:

Date Picker

As you can see, it's MM/DD/YYYY. I've tried multiple browsers and operating systems, and the issue very much seems to be server-side, somewhere. This is on Splunk Enterprise 8.2.5. However, I should add that this is only impacting some newer servers (1-2 years old) that I've set up; systems running other operating systems, but also on 8.2.5, are not exhibiting this behaviour (either in the Search Head or if I go directly to an Indexer). I've also tried creating new/test users, and it impacts them as well, and there's nothing obvious in my user-prefs.conf that seems to relate to this, either on the working systems or the non-working.

This must be so stunningly obvious that I'm just missing it. Any suggestions appreciated!
So we are seeing some errors pertaining to our cluster master being an unhealthy instance. We have a link called Generate Diag, but when we click it the message reads: The app "splunk_rapid_diag" is not available. We are on version 9.0.0 of Splunk Enterprise. The following Splunk documentation says how to get to it, but we do not see that option: https://docs.splunk.com/Documentation/Splunk/9.0.0/Troubleshooting/Rapiddiag

"How do I access RapidDiag? The RapidDiag UI is located in the Settings menu, under System > RapidDiag."
Dear Community, how do I display values in a second dropdown based on the value selected in the first dropdown?

<input type="dropdown" token="site" searchWhenChanged="true">
| inputlookup regions_instances.csv | fields region region_value

<input type="dropdown" token="instance" searchWhenChanged="true">
| inputlookup regions_instances.csv | search region=$site$ | fields instance instance_value