All Topics

Good morning. I wanted to ask about something I read somewhere but am not entirely sure about; I am a beginner in Splunk architecture. If our architecture uses a deployment server for configuring universal forwarders and deploying apps, what happens if the deployment server is decommissioned (or goes down)? Will the universal forwarders lose their configuration, or does nothing happen if the deployment server is "removed" from the architecture?
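For context, a universal forwarder is pointed at its deployment server by a stanza like the sketch below (the host and port are placeholders, not from the question). Apps and configuration the server has already pushed live under $SPLUNK_HOME/etc/apps on the forwarder's own disk, so removing the server does not delete them; the forwarder simply stops receiving updates.

```ini
# deploymentclient.conf on the universal forwarder (hypothetical values)
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```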
Hi everyone, I've been trying for several days to create a query that can give me the list of name/value pairs inside a JSON file. The file has hundreds of events and each event has multiple name/value pairs, as in the picture below. I'm not able to create a table out of the name/value pairs from each event, like this:

GarageId   GarageClassId   GarageTypeId
4          1               5
3          4               5
6          5               4

Can anyone help me or guide me in the right direction? Thanks a lot for your time and support.
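As a starting point, one possible sketch, assuming the JSON is the raw event and the field names in the screenshot are literal (the index and sourcetype are placeholders):

```spl
index=your_index sourcetype=your_json_sourcetype
| spath
| table GarageId, GarageClassId, GarageTypeId
```

If the name/value pairs sit inside a JSON array, `spath` with an explicit path followed by `mvexpand` may be needed to get one table row per pair.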
Hi, we are using Splunk Stream to pull logs from DNS servers. All the target servers have a similar naming convention and do show up under preview based on the regex rule for the group, but one of them never becomes part of the group. This server (003) ends up under defaultgroup. The preview lists all matched assets, yet 003 is not added to the group. The description for defaultgroup reads "Used when there is no matching group found for a given stream forwarder ID", but in this case 003 clearly matches a group along with the others. Are there any other parameters, apart from the name, which might be playing a role here? Thanks, ~ Abhi
Hello all, on the splunkd health report, what is the difference between search Lag and Delay? [ref: https://docs.splunk.com/images/e/ee/Splunkd_health_report_8.0.0.png] Our deployment has a high number of saved searches that trigger on cron (every 5m, 15m, 30m, 1h, etc.) and we are working to minimise the concurrency by introducing a schedule window and skew. I know exactly which searches are triggering beyond their scheduled time (dispatch_time - scheduled_time from scheduler.log) and which searches are skipping, but I do not understand what Splunk signifies as Lag and Delay in terms of searches. I have gone through $SPLUNK_HOME/var/log/health.log and the server/health/splunkd/details API endpoint, but they give the same messages as the health indicator. Thanks in advance!
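Not a definition of Lag vs Delay itself, but a sketch of the measurement the question describes, computed from the scheduler's internal logs (field names as they appear in scheduler.log; index and sourcetype are the standard internal ones):

```spl
index=_internal sourcetype=scheduler savedsearch_name=*
| eval lag_sec = dispatch_time - scheduled_time
| stats avg(lag_sec) as avg_lag max(lag_sec) as max_lag count by savedsearch_name
| sort - max_lag
```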
Hi all, a past consultant of ours wrote the following correlation search to detect excessive user account lockouts:

index=wineventlog EventCode=4740
| stats count min(_time) as firstTime max(_time) as lastTime by user, signature
| `ctime(firstTime)`
| `ctime(lastTime)`
| search count > 5

The results display the following:

user         signature                       count    firstTime            lastTime
<user name>  A user account was locked out   <count>  01/07/2021 07:57:10  01/14/2021 02:56:51

The count above is a total of lockouts from different machines in our environment over a period of time. How can I add an additional column to list the actual machine names causing the lockouts (this data would be taken from the field "dest_nt_domain")? And is there a better way of doing this? i.e.:

user         signature                       count    firstTime            lastTime             machines
<user name>  A user account was locked out   <count>  01/07/2021 07:57:10  01/14/2021 02:56:51  <computer1>, <computer2>, ...
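One possible way to get the machine list, assuming dest_nt_domain is extracted on these events: add a values() clause to the existing stats call (it collects the distinct machine names per user), and use where rather than search for the numeric filter:

```spl
index=wineventlog EventCode=4740
| stats count min(_time) as firstTime max(_time) as lastTime
        values(dest_nt_domain) as machines by user, signature
| `ctime(firstTime)` | `ctime(lastTime)`
| where count > 5
```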
Hi everyone, can someone provide me the query for displaying a hard-coded numeric value in Splunk? I just need to display an arbitrary numeric value. Thanks in advance.
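A minimal sketch: makeresults generates a single event, and eval attaches the hard-coded value (the field name and the value 42 are arbitrary):

```spl
| makeresults
| eval my_value = 42
| table my_value
```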
Hi, I can perform searches without problem in both user accounts and admin accounts, but dashboards are not working for users (except admin). After further investigation, I found that search jobs initiated by dashboards cannot be accessed by their owner, though those same search jobs can be accessed by admin. Because of this, dashboards are not working for non-admin users. This is what I found in search.log:

{"success": false, "offset": 0, "count": 0, "total": 0, "messages": [{"type": "ERROR", "message": "job sid=test_user_dGVzdF91c2Vy__search__search1_1610628799.1913 not found", "time": "2021-01-14T12:53:26+0000"}], "data": null}

Has anyone experienced the same? Please let me know what's wrong. Thanks, Chandika
Hi everyone, I would like to ask how to redirect UF logs to a specific index on the indexer. I cannot modify the UF's outputs.conf, so is there any other method that allows me to put the logs into an index other than the main index (the UF default)? The sourcetype is WinEventLog. Or is there any method to control this at the indexer level? Assume there is only one indexer and one UF.
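One commonly used approach is index-time routing on the indexer via props.conf and transforms.conf. A sketch with hypothetical stanza and index names: the target index must already exist on the indexer, and the indexer needs a restart after the change.

```ini
# props.conf on the indexer
[WinEventLog]
TRANSFORMS-route_to_index = route_wineventlog

# transforms.conf on the indexer
[route_wineventlog]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = your_target_index
```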
Hi, I am searching for a way to dynamically assign the value of maxspan in a transaction; the value should come from a lookup. So far I have had no success whatsoever. I have tried the solution proposed here: https://community.splunk.com/t5/Splunk-Search/How-to-dynamic-assign-variable-to-maxspan-and-span/m-p/398004 however it does not work for me; in particular, the proposed solution "fixes" maxspan to the value in the eval expression, which is 7m in this case:

| makeresults
| eval maxspan="7m"
| map search="search index=_* | transaction host maxspan=$maxspan$"

In essence I would like to be able to notify myself of related events that happen within a certain period of time, where that time and the number of events per type are dynamically assigned per the lookup. Here is an example of my search:

sourcetype=servername host=hostname
| lookup flex_test f1 AS f1 OUTPUT mx_span AS mx_span, ev_count AS ev_count
| transaction f1 f2 maxspan={dynamic value should come here}
| eval alert = if(eventcount>ev_count,"ev_ALERT","OK")
| ...

And here is an example of the lookup table (I have tried different formats):

f1,mx_span,ev_count
34,1,5
35,60,10
36,2m,5

Kind regards!
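One possible direction, not a confirmed solution: since maxspan is parsed at search compile time, the dynamic value has to be injected as a token via map. The sketch below assumes a single lookup row applies per run (map re-runs the inner search once per outer result, so the outer search should be reduced to one row first); all field and lookup names come from the question.

```spl
| makeresults
| eval f1=34
| lookup flex_test f1 OUTPUT mx_span, ev_count
| map search="search sourcetype=servername host=hostname | transaction f1 f2 maxspan=$mx_span$ | eval alert = if(eventcount > $ev_count$, \"ev_ALERT\", \"OK\")"
```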
I have been trying to create a custom command, but getOrganizedResults() doesn't seem to get the previous search results. Just to test things I wrote this:

import sys
import splunk.Intersplunk

# this call populates the results variable with all the events passed into the search script
results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()

# hand the results right back to Splunk
splunk.Intersplunk.outputResults(results)

No data came back; the message was: "No results found. Try expanding the time range." The command was added in commands.conf and I was authorized to use it. Why can't getOrganizedResults get the data?
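For comparison, a commands.conf stanza of the shape such a pass-through script usually needs (the command and file names are placeholders). If the command is invoked with no preceding events in the pipeline, there is nothing for getOrganizedResults() to read, so it is worth checking both the stanza and how the command is called.

```ini
# commands.conf (hypothetical stanza)
[mycommand]
filename = mycommand.py
retainsevents = true
streaming = true
```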
I have one machine with Splunk Enterprise and on another machine I've installed a universal forwarder. Even though everything seems OK from the installation point of view, somehow Splunk Enterprise does not detect the forwarder:
- In the Splunk Web interface I've enabled receiving on port 9997 (Splunk Web: Settings -> Forwarding and receiving)
- After installing the forwarder (Linux), I started it from /bin:

./splunk start (accepted license)
./splunk add forward-server <splunk_web_server>:9997
# add a source
./splunk add monitor /var/log/auth.log -sourcetype linux_secure
./splunk restart

Am I missing something? Thx
I am using the collect command with a post-process search in a dashboard. I want to control from the dashboard whether the search results are written to a summary index, so I created this dashboard:

Panel 1:
search id="main" { main search, with a table of all the fields needed }

Panel 2:
input field token=collect_index, single selection:
  - no summary: (empty)
  - to summary: | collect index=summary_index_name
search base="main" {
  | table the fields needed
  | where user=$drilldown_user$ (token from Panel 1 via drilldown)
  | eval ... some processing
  $collect_index$
}

However, the summary index ends up with duplicate collected events. Checking index=_audit, the same search (run_collect) was indeed executed multiple times within one second. When I instead created two dedicated, independent search panels in the dashboard, the collected events were not duplicated, and index=_audit shows run_collect running only once. Any suggestions? Thanks.
I have the field - DATE, for example: DATE: ^9F33006E0F848^00950108080008000^9F37008B1832B33^9F1E0163236353132303337^9F26016B9F12AB2FA191854^9F36003003F^00820041980^009C00200^9F1A0040643^009A00621... See more...
I have a field, DATE, for example:

DATE: ^9F33006E0F848^00950108080008000^9F37008B1832B33^9F1E0163236353132303337^9F26016B9F12AB2FA191854^9F36003003F^00820041980^009C00200^9F1A0040643^009A006210114^9F02012000000058700^5F2A0040643^9F03012000000000000^9F2700280^9F340061F0002^9F3500222^0084014A0000006581010^9F090040002^9F41006023804^9F100640FA501A030002000ED085B191CB9E97C10040000000000000000000000000000

I need to extract only the ^9F10... segment, for example:

DATE: ^9F100640FA501A030002000ED085B191CB9E97C10040000000000000000000000000000

How can I do this?
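A sketch with rex, assuming the wanted segment is ^9F10 followed by uppercase hex digits and runs to the next caret or the end of the string (the capture-group name is arbitrary):

```spl
| rex field=DATE "(?<segment_9f10>\^9F10[0-9A-F]*)"
| table segment_9f10
```

Adjust the character class if the payload can contain characters outside 0-9 and A-F.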
I am looking for an app which can monitor our Splunk dashboards: how many metrics there are, how many incidents are opened and closed, and the status of those incidents. Does Splunk have that kind of app for monitoring an operational process? Thanks, Sahil
Hi all, I'm new to Splunk and I was wondering if you can help me. This is the scenario: I'm using inputlookup. I have a CSV file with two fields; field1 is the original IP and field2 is the second IP. What I want to do is: if the user gets one of the IP addresses in field1 and also gets any IP address in field2, then it should alert. But if the user only gets an IP address in field1 and does not get an IP address in field2, it should not alert. I have multiple IP addresses in field1 and only 4 IP addresses in field2. Thank you.
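One possible shape for this, with placeholder names for the index, the event's IP field, and the lookup file: match the event IP against field1, pull back field2, and keep only the rows where field2 came back non-null, so events with a field1 match but no field2 value never alert.

```spl
index=your_index
| lookup your_ip_pairs.csv field1 AS src_ip OUTPUT field2
| where isnotnull(field2)
```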
Hi all, why does the count of "Events per day" in the "Indexing audit" dashboard not match the |tstats result? E.g. the number from "Events per day" in the "Indexing audit" dashboard:

index   count
main    10000

The number from |tstats count where index=main by index:

index   count
main    500

May I know which one is correct?
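One thing worth checking (an assumption about the cause, not a confirmed diagnosis): tstats filters on event time (_time) by default, while an indexing-audit view counts by when events were indexed. Comparing against index time may reconcile the two numbers:

```spl
| tstats count where index=main _index_earliest=-1d@d _index_latest=@d by index
```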
Hi Team, I have one requirement. I have one TREND Chart where I am showing FAILURE ,SUCCESS AND Total Counts in a trend. The problem I am facing is I have one drop down "Build Result" which consis... See more...
Hi Team, I have one requirement. I have one trend chart where I am showing FAILURE, SUCCESS, and Total counts as a trend. The problem I am facing: I have one drop-down, "Build Result", which consists of 3 values:

AllBuildResult
SUCCESS
FAILURE

When I select "SUCCESS" from the drop-down, the values come out right, but the label shows Total instead of SUCCESS. The same happens with FAILURE as well. Below is my code:

<row>
  <panel>
    <chart>
      <title>Jenkins Builds Deployment Report</title>
      <search>
        <query>index="abc" sourcetype="xyz" $orgname$ $buildresult$ | timechart span=1d count(BuildResult) by BuildResult useother=f limit=25|addtotals</query>
        <earliest>$field4.earliest$</earliest>
        <latest>$field4.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.text">Date</option>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.text">Count</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.enabled">0</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.chart">line</option>
      <option name="charting.chart.bubbleMaximumSize">50</option>
      <option name="charting.chart.bubbleMinimumSize">10</option>
      <option name="charting.chart.bubbleSizeBy">area</option>
      <option name="charting.chart.nullValueMode">connect</option>
      <option name="charting.chart.showDataLabels">none</option>
      <option name="charting.chart.showMarkers">1</option>
      <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
      <option name="charting.chart.stackMode">stacked</option>
      <option name="charting.chart.style">shiny</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.placement">right</option>
      <option name="charting.lineDashStyle">longDash</option>
      <option name="height">400</option>
      <option name="trellis.enabled">0</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">large</option>
      <option name="trellis.splitBy">OrgFolderName</option>
    </chart>
  </panel>
</row>

Can someone guide me on that?
I'm extracting data from a raw log and putting it in a table. Now I want to add a column that indicates the action an admin should take if a port is down, kind of like this:

Time|System|Domain|Status    |Action
-------------------------------------
XXX |XXX   |XXX   |DOWN      |Call IT
XXX |XXX   |XXX   |infiltrate|Call Security

Here, the Action field/column is newly created data that is not in the raw log but is generated based on the Status value: "Call IT" if Status is DOWN, or "Call Security" if Status is infiltrate. Is there any way to achieve this?
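One way to derive such a column is eval with case(), mapping each Status value to an action string (the final true() branch and its default text are an assumption):

```spl
| eval Action = case(Status=="DOWN", "Call IT",
                     Status=="infiltrate", "Call Security",
                     true(), "No action required")
| table Time, System, Domain, Status, Action
```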
Hi Team, we have Splunk Enterprise v7.2.9.1 and are planning to upgrade to v8.1.1. As a pre-requisite, we will upgrade Splunk_TA_nix v7.0.1 to Splunk_TA_nix v8.2.0 so that it is compatible with both Splunk Enterprise v7.2.9.1 and v8.1.1. In our environment, we still have Splunk Universal Forwarders on v7.0.4. With this, our question is: if we upgrade Splunk_TA_nix to v8.2.0, will it still work on Splunk Universal Forwarders v7.0.4? On Splunkbase, Splunk_TA_nix v8.2.0 is only listed as compatible with Splunk v7.2 or later: https://splunkbase.splunk.com/app/833/ Will Splunk_TA_nix v8.2.0 work with Splunk UF v7.0.4? Let me know your insights. Thanks.
Hi Splunker, I tried install Eventgen app version 6.5.2 on Splunk Enterprise 8.1.1 under Windows 10 for testing. However, came out the error message: Unable to initialize modular input "modinput_e... See more...
Hi Splunkers, I tried to install the Eventgen app version 6.5.2 on Splunk Enterprise 8.1.1 under Windows 10 for testing. However, this error message came up:

Unable to initialize modular input "modinput_eventgen" defined in the app "SA-Eventgen": Introspecting scheme=modinput_eventgen: script running failed (exited with code 1).

Any idea how to fix this issue? Thanks.