When adding logs from forwarders [Add Data - Select Forwarders], the list of available hosts appears to be random. We have hundreds of servers, and having to scroll through pages of server names that all look the same to find the one you want is a royal pain. Is there any way to have that list appear sorted to make the selection process easier? Thanks.
I have a search that uses the transaction command:

| transaction startswith=<...> endswith=<...>

to group events into the transactions I want to see. How would I search this further to get the time difference between each event in the transaction, and then graph these time differences as a line/bar chart with the events/hosts on the x-axis and time on the y-axis? There are no specific fields in each event that I want to use to calculate the time difference; I only want to show the time difference between each and every raw log in the transaction.
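A sketch of one approach, assuming the goal is the gap between consecutive raw events rather than whole-transaction duration (the by host split and the 1-minute span are assumptions): streamstats with a two-event window computes each gap, and timechart then plots it. Note that transaction itself already emits a duration field (seconds between the startswith and endswith events) if the total transaction time is all that is needed.

```
<base search>
| sort 0 _time
| streamstats window=2 current=t range(_time) as gap_seconds by host
| timechart span=1m max(gap_seconds) by host
```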
Hello, we have upgraded the Lookup Editor app to the latest version, but there appears to be a bug when editing a lookup. When I change something in a lookup and try to save it, the changes are not applied. (Changing one cell value at a time works; changing multiple cells is the problem.) A new version (3.4.2) was recently released with a fix for this issue, but it does not work either. I have tried installing older versions (3.3.3, 3.4.0, and 3.4.1) and hit the same issue. I just wanted to check whether this bug is still present in these versions or whether I am missing something. Splunk Enterprise version: 8.0.3. Update: this issue seems to be specific to Enterprise 8.0.3. I tested with 8.0.1 and it worked fine; after upgrading back to 8.0.3 it stopped working again. Thanks
I have tried putting the following in props.conf: NO_BINARY_CHECK = true, NO_BINARY_CHECK = 1, and CHARSET = AUTO. None of these worked: when I look at splunkd.log, Splunk still treats the file as binary. How can I fix this?
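A minimal sketch of where these settings usually need to live, assuming a monitored file input (the path in the stanza is hypothetical). The settings take effect only if they are on the instance that first reads the file (the forwarder, for forwarded inputs), in a stanza that actually matches the source or sourcetype, followed by a restart of that instance:

```
# props.conf on the instance monitoring the file (hypothetical path)
[source::/var/log/myapp/export.log]
NO_BINARY_CHECK = true
CHARSET = UTF-8
```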
SITUATION
I'd like to track the changes made to lookup tables. I found this helpful post: https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-get-the-Audit-for-Lookup-files-modification-using-the/td-p/119363
I observe two sourcetypes containing "Lookup edited successfully":
lookup_editor_rest_handler
lookup_editor_controller
Each has unique event counts for the same time window, and lookup_editor_controller has the "action" field while lookup_editor_rest_handler does not.
PROBLEM
I don't know which sourcetype I should use.
QUESTION
Which sourcetype should I use?
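A sketch for comparing the two sourcetypes side by side before committing to one. The index=_internal location is an assumption, based on the Lookup Editor app writing its logs under $SPLUNK_HOME/var/log/splunk, which Splunk indexes into _internal:

```
index=_internal (sourcetype=lookup_editor_rest_handler OR sourcetype=lookup_editor_controller)
    "Lookup edited successfully"
| stats count values(action) as action by sourcetype
```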
Hi, since the move to the new site I have lost my old Splunk Answers account. Or rather, when I logged in, there was no Splunk Answers user linked to it anymore. How do I get my old username back?
Hi, I am currently using a scheduled search (a "master" search) that uses the Splunk REST API to get a list of specific saved searches and then uses the "map" command to run each one of them. Each of the saved searches writes some events into the same index. I sometimes see very strange results (e.g. incomplete or duplicate data), which leads me to think that my scheduled master search is running into problems with the "map" command. No skipped-search warnings appear in the logs, and there is plenty of computing power in the Splunk instance I am using. Does anyone here know of potential issues with the "map" command? Any other ideas on how I can programmatically call saved searches in a sequence?
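One detail worth checking against the incomplete-data symptom: map only runs up to its maxsearches limit (10 by default) and silently ignores any remaining input rows, so a list of more than 10 saved searches gets truncated. A sketch with a hypothetical saved-search name filter:

```
| rest /services/saved/searches splunk_server=local
| search title="nightly_*"
| fields title
| map maxsearches=50 search="| savedsearch \"$title$\""
```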
I have an issue with the Curl Splunk app: I have a curl command and am not sure how to run it with the Splunk Curl app or the API modular input. I've tried lots of combinations, but nothing worked. Can someone help with this, please?

curl -L -X POST 'https://xyz.com/ui/api/token' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -H 'Cookie: abchdgshksjshskks' \
  --data-urlencode 'grant_type=client_credentials' \
  --data-urlencode 'client_id=ahsgshj-nshgsh-' \
  --data-urlencode 'client_secret=shgshgs-jagjs-hshsh'
Hi all, after deploying the DFS manager on the search head (master), when I try to start Splunk I see the error "The Spark node failed to initialize" after the debug message "Building configuration information". Also, the DFS manager application page says "Unable to connect Spark master", although the Splunk search head itself is working fine. I checked that ports 8005-8010 are open, and I changed JAVA_HOME in splunk_dfs_manager.sh and spark_app.conf since it was complaining about a missing JAVA_HOME. I'm attaching a snippet of splunkd.log. Please advise.
Hi, I'm trying to build a grid layout with multiple columns and nested rows in them. I was able to achieve it through HTML with inline styles; below is what I've been trying. Now I would like to set a token on each box in the grid, so that when I click a box it drills down to a new search page. Each box has a different query.

<dashboard>
  <row>
    <panel>
      <html>
        <style>
          .grid-container {
              display: grid;
              grid-template-columns: auto auto auto auto auto;
              grid-gap: 10px 40px;
          }
          .item1 {
              display: grid;
              grid-template-columns: subgrid;
              grid-gap: 10px;
          }
          .item1 > div {
              background-color: #F8F8FF;
              text-align: top;
              width: 150px;
          }
          .grid-container > div {
              background-color: #228B22;
              padding: 20px 20px;
              font-size: 30px;
              border-radius: 25px;
          }
        </style>
        <h1 style="font-size:40px;text-align:justify;">ABC</h1>
        <div class="grid-container">
          <div class="item1">
            <h5 style="text-align:center;"> ABCDE </h5>
            <div class="acc" style="background-color:$app1color$">One</div>
            <div class="add" style="background-color:$app2color$">Two</div>
          </div>
        </div>
      </html>
      <search>
        <query>query to evaluate status which outputs color
          | fields status</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
        <progress>
          <set token="color1">$result.status$</set>
        </progress>
      </search>
      <search>
        <query>query to evaluate status which outputs color
          | fields status</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
        <progress>
          <set token="color2">$result.status$</set>
        </progress>
      </search>
    </panel>
  </row>
</dashboard>
Hi, does anybody know how to ingest the Windows Server Backup logs using the Splunk_TA_windows add-on? I have tried adding the following configuration to local\inputs.conf, but it does not seem to work.

[WinEventLog:Microsoft-Windows-Backup/Operational]
disabled = 0
index = wineventlog
renderXml = false
start_from = oldest
checkpointInterval = 5

Any suggestions, please?
Hi @wazuh community, I'm trying to set up the Wazuh app and am facing an issue where I can't save my API settings via the UI. I'm using the latest Wazuh app for our Splunk build, wazuhapp-splunk-3.11.4_8.0.1, and when I try to save the configuration I get a 500 error locally when this check is run (hostname and password are omitted):

(from web_access.log)
/en-US/custom/SplunkAppForWazuh//manager/check_connection?ip=hxxps://X.Y.Z.T&port=55000&user=api_user&pass=******** HTTP/1.1" 500

(from web_service.log)
2020-06-12 06:28:43,613 INFO [5ee358db8f7fdbf705c710] error:333 - GET /en-US/custom/SplunkAppForWazuh/manager/check_connection 127.0.0.1 8065
File "<string>", line 351, in check_connection
File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2800>", line 2, in check_connection
File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2798>", line 2, in check_connection
File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2797>", line 2, in check_connection
File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2796>", line 2, in check_connection

My Splunk instance runs as a non-root user. I would appreciate any help provided.
Hi, we are trying to access the search head through the REST API but get an invalid-certificate warning. How can we get rid of this message and connect without any warning? We are already using web and server certificates.
Hello, I'm using SAI to get filesystem usage; unfortunately the mount point is not populated, or rather, it is only partially populated. The following query (a KPI in ITSI)

| mstats min(df.free) as "df_free" max(df.used) as "df_used" WHERE `sai_metrics_indexes` by host,mount span=30s
| eval host_dev = host . ":" . mount

doesn't work because the "mount" dimension is missing. Replacing "mount" with "device" gives results without the mount point. After changing collectd.conf as follows (the changed lines were highlighted in the original post)

<Plugin df>
  FSType "ext2"
  FSType "ext3"
  FSType "ext4"
  FSType "XFS"
  FSType "rootfs"
  FSType "overlay"
  FSType "hfs"
  FSType "apfs"
  FSType "zfs"
  FSType "ufs"
  MountPoint "/^/.+/"
  ReportByDevice false
  ValuesAbsolute false
  ValuesPercentage true
  IgnoreSelected false
</Plugin>

I get data from only one mount point (/boot); the others are still missing. I don't understand why the mount dimension is missing and why only some mount points are picked up. Is it my fault? I use the latest version of SAI, ITSI 4.4.3, and Splunk 8.0.3. Thanks
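A hedged observation about the pasted config, assuming collectd 5.x df-plugin semantics: with IgnoreSelected false, only filesystems matching the selectors are reported, and FSType values must match what the kernel reports, which on Linux is lowercase ("xfs", not "XFS"); a mismatch there alone can drop most mounts. A minimal sketch that keeps per-mount-point reporting (only the filesystem types in use need listing):

```
# collectd.conf df plugin — a sketch, assuming collectd 5.x
# ReportByDevice false reports by mount point (the "mount" dimension).
# With IgnoreSelected false, only the listed types are kept.
<Plugin df>
  FSType "ext4"
  FSType "xfs"
  IgnoreSelected false
  ReportByDevice false
  ValuesAbsolute false
  ValuesPercentage true
</Plugin>
```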
Hi, I have a query where I need to join against a lookup to match records. It is horribly slow, probably because the join command is very expensive. Is there a way to optimize this search? I have to run it over the last 90 days and it keeps running for ages. My lookup consists of only one column, url, against which I need to match the records and then count them. Let me know if someone can advise.

index=myapp_pp sourcetype=access_combined GET host="my-server-*"
| join requested_content [| inputlookup vanity.csv | rename url as requested_content]
| stats count by requested_content
| sort - count
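A sketch of a join-free rewrite, assuming the lookup is only used as a filter: the lookup command is applied as events stream by and is far cheaper than join. The output field name "matched" is introduced here purely for the filter step:

```
index=myapp_pp sourcetype=access_combined GET host="my-server-*"
| lookup vanity.csv url AS requested_content OUTPUT url AS matched
| where isnotnull(matched)
| stats count by requested_content
| sort - count
```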
Dear team, we have a dataset containing a calculated percentage value, as below. We need to calculate the average of Eff_% over every 6 events, i.e. events 1-6, 2-7, 3-8, 4-9, etc. Here is the code we tried; please check and suggest:

index="*WF*" index!="wf_summary" WindFarm="Amerali"
| eval PitchAngle = round(PitchAngle, 0)
| eval RotorSpeed = round(RotorSpeed, 1)
| eval WindSpeed = round(WindSpeed_10AV, 2)
| eval ActivePower = round(PowerKW_10AV, 2)
| eval AmbientTemperature = round(AmbientTemperature, 2)
| eval GeneratorSpeed_PLCFilter = round(GeneratorSpeed_PLCFilter, 2)
| eval NacellPosition = round(NacellePosition_10AV, 2)
| stats values(PitchAngle) as PitchAngle values(RotorSpeed) as RotorSpeed values(WindSpeed) as WindSpeed values(ActivePower) as ActivePower values(AmbientTemperature) as AmbTemp values(GeneratorSpeed_PLCFilter) as GeneratorSpeed values(NacellPosition) as NacellePosition by _time, WindFarm, Turbine
| eval Category = case(PitchAngle < 20 AND RotorSpeed >= 9.7, "Run", PitchAngle >= 20 AND RotorSpeed < 9.7, "NotRunning", PitchAngle >= 20 AND RotorSpeed > 9.7, "Transition", PitchAngle < 20 AND RotorSpeed < 9.7, "UnKnown")
| search Category = "Run"
| lookup Power WS AS WindSpeed OUTPUTNEW EST_Power AS EstimatedPower
| eval CM_Status = case(WindSpeed > 0 AND WindSpeed < 10, "0", WindSpeed >= 10 AND WindSpeed <= 12, "1", WindSpeed > 12, "0")
| eval Eff_% = (ActivePower/EstimatedPower)*100
| fillnull value=0 Eff_%
| fields - PitchAngle, RotorSpeed, AmbTemp, GeneratorSpeed, NacellePosition, Category
| sort Turbine
| streamstats count(Eff_%) as Ecount
| eval alert_New = mvrange(1, Ecount+1, 6)
| eventstats avg(Eff_%) as Alert by alert_New
| eval Alert = round(Alert, 2)
| table _time, WindFarm, Turbine, WindSpeed, ActivePower, EstimatedPower, CM_Status, Eff_%, Ecount, Alert, alert_New
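A sketch of a simpler approach for the averaging step: a sliding average over every 6 consecutive events (1-6, 2-7, 3-8, ...) is exactly what streamstats with window=6 computes, so the mvrange/eventstats steps may be unnecessary. The by Turbine split is an assumption about how the windows should be grouped:

```
<base search producing Eff_% per event, sorted by Turbine and _time>
| streamstats window=6 avg(Eff_%) as Rolling_Eff by Turbine
| eval Rolling_Eff = round(Rolling_Eff, 2)
```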
Hello community. There have been a lot of questions about lookups, but I did not find my answer among them. There is a lookup table, fgt_policy. Column 1 is the policy number (the cfgobj field in the logs) and column 2 is the policy name. The point of the search query is that as soon as a policy is changed on the firewall, an alert is triggered. There is no policy name in the firewall logs themselves, only a field with a number, so I created a table into which I transferred all the names of our policies. Also, if a new code appears in the policy field (cfgobj), it should be added to the fgt_policy table, but the current result should show only known event codes (cfgobj) together with the policy name. The field in the firewall log with the policy event code is cfgobj. So far, the result also includes codes for which there is no description in column 2 yet. I will add the policy names to the table by hand when new codes appear in the cfgobj field.
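A sketch of the filtering step, assuming the lookup's columns are named cfgobj and policy_name (hypothetical names, as are the index and sourcetype): a lookup followed by a null check keeps only the codes that already have a name in the table.

```
index=firewall sourcetype=fortigate* cfgobj=*
| lookup fgt_policy cfgobj OUTPUT policy_name
| where isnotnull(policy_name)
| stats count by cfgobj, policy_name
```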
Hi, to centralize part of our logs with another team, we need to push the results of a Splunk query to a Graylog instance. I didn't find a Splunk app or Splunk feature to do it. Do you have an idea? Thanks.
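One built-in option worth a look, sketched here with hypothetical host, port, sourcetype, and stanza names: a (heavy) forwarder can route events to a third-party system over syslog, which Graylog can receive with a syslog input. Note this forwards raw events matching a sourcetype rather than the results of an ad-hoc query:

```
# outputs.conf on a heavy forwarder
[syslog:graylog]
server = graylog.example.com:5140
type = tcp

# props.conf — pick the sourcetype(s) to route
[my_sourcetype]
TRANSFORMS-graylog = route_to_graylog

# transforms.conf
[route_to_graylog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = graylog
```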
Hi everyone, I installed the Splunk Machine Learning Toolkit for testing from Splunkbase, and when I try to run the fit command I get an error. What can I do to resolve it?

Error in 'fit' command: External search command exited unexpectedly with non-zero error code 1.

Thanks.
Hi, below is the information about the environment I'm working with:

Jenkins version: 2.222.1
splunk-devops plugin version: 1.9.3
splunk-devops-extend plugin version: 1.9.3
Pipeline: Stage View plugin version: 2.13

Some time back we were able to see the "stages{}.children{}.*" information, but now we only see the "stages" information, not the children. I am not able to see stages.children info in the raw data either. In the Jenkins system log I could not find any relevant information except a few retries ("will resend the message..."). I'm not sure whether the issue is with Jenkins or with the Splunk plugin. One additional piece of info: the Jenkins Stage View plugin is supposed to show stage logs in a pop-up window when 'log' is clicked, but it is not showing anything; not sure if that is related. Any help would be much appreciated.