All Topics


Hi, I am very new to Splunk. I searched for this but could not find a match. Is it possible to find what system or host a user is currently logged in to?
Hi guys, we have this query, which gives its output as a table with 3 columns: Servername, ServerIP and ServerLocation.   | inputlookup WindowsTag.csv   Now we are trying to add a search input using a token, so we modified the query as below:   | inputlookup WindowsTag.csv | search Servername=$mytoken$   The problem is that when we search for a text, it is only matched against the "Servername" column, but we want to search for that text in all the available columns (Servername, ServerIP and ServerLocation) in the table. Can we do that? We tried tweaking the query in many ways but had no luck. Can someone please guide us on how to do this?
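One way to do this, assuming the token should match a substring in any of the three columns, is to repeat the token against each field joined with OR (a sketch; adjust the wildcards to taste):

```
| inputlookup WindowsTag.csv
| search Servername="*$mytoken$*" OR ServerIP="*$mytoken$*" OR ServerLocation="*$mytoken$*"
```

The explicit OR form makes the intent unambiguous and keeps working if more columns are added later (just extend the OR list).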
Hi. I am trying to create real-time alerts using the Splunk REST API, sending a POST to https://localhost:8089/services/saved/searches?output_mode=json with the following parameters: alert_type = always, is_scheduled = 1, cron_schedule = * * * * *, alert_comparator = greater than, alert_threshold = 0, search = index=*, name = Demo-alert-test, actions = webhook, action.webhook.param.url = my-webhook-url, allow_skew = 0. With these parameters I can only generate alerts on a cron schedule. Is there any way to create alerts with real-time scheduling? Any suggestions are welcome. Thanks!
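One thing to try (a sketch, not verified against your instance): a saved search becomes a real-time alert when its dispatch window is real-time, set via dispatch.earliest_time/dispatch.latest_time, rather than via cron. The rt-1m window below is an assumption; pick whatever window suits your data:

```
curl -k -u admin:changeme https://localhost:8089/services/saved/searches \
  -d name=Demo-alert-test \
  -d search="index=*" \
  -d is_scheduled=1 \
  -d dispatch.earliest_time=rt-1m \
  -d dispatch.latest_time=rt \
  -d alert_comparator="greater than" \
  -d alert_threshold=0 \
  -d actions=webhook \
  -d action.webhook.param.url=my-webhook-url
```

With a real-time dispatch window the cron_schedule parameter is no longer what drives alert evaluation.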
index=dummy <mySearchCondition> | search response_code1!=200 | stats count When I run this query, I get 0 in the count column. But when I try this query: index=dummy <mySearchCondition> | bin _time span=1d | eval Time=strftime(_time, "%d/%m/%Y %H:%M") | search response_code1!=200 | stats count by Time I am not able to see any output. The expected answer is something like: Time count 2021-04-20 04:36 0   What should I do?
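The likely cause is that stats count by Time only emits rows for time buckets that contain matching events, so a day with zero matches produces no row at all. timechart fills empty buckets with 0, so a sketch along these lines may behave as expected:

```
index=dummy <mySearchCondition> response_code1!=200
| timechart span=1d count
```

If you need the stats form, an alternative is appendpipe or fillnull on a pre-generated bucket list, but timechart is the idiomatic fix.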
Hello guys, I have Hadoop logs that are not parsed properly when I use sourcetype=linux_secure or access_combined. I have gone through the Splunk documentation and the Hadoop docs to see if there is a way for me to parse the logs properly, but I am not seeing anything helpful. I would be really glad if someone could point me in the right direction. Sample logs below: ... 10 more at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:326) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181] at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) [hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:649) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1855) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?] 
at javax.security.auth.Subject.doAs(Subject.java:360) ~[?:1.8.0_181] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181] at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:649) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:652) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream 2021-04-20 12:34:21,906 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-56]: Error occurred during processing of message. ... 10 more at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:326) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181] at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) [hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:649) 
~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1855) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?] at javax.security.auth.Subject.doAs(Subject.java:360) ~[?:1.8.0_181] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181] at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:649) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:652) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[hive-exec-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0] java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream 2021-04-20 12:34:21,905 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-56]: Error occurred during processing of message. 2021-04-20 12:34:21,887 INFO org.apache.hadoop.hive.ql.session.SessionState: [a11894f3-4ce4-478a-a11a-8fa624429f33 HiveServer2-Handler-Pool: Thread-6527170]: Resetting thread name to HiveServer2-Handler-Pool: Thread-6527170 2021-04-20 12:34:21,887 INFO org.apache.hadoop.hive.conf.HiveConf: [a11894f3-4ce4-478a-a11a-8fa624429f33 HiveServer2-Handler-Pool: Thread-6527170]: Using the default value passed in for log id: a11894f3-4ce4-478a-a11a-8fa624429f33 2021-04-20 12:34:21,884 INFO org.apache.hadoop.hive.ql.session.SessionState: [HiveServer2-Handler-Pool: Thread-6527170]: Updating thread name to a11894f3-4ce4-478a-a11a-8fa624429f33 HiveServer2-Handler-Pool: Thread-6527170 2021-04-20 12:34:21,884 INFO org.apache.hadoop.hive.conf.HiveConf: [HiveServer2-Handler-Pool: Thread-6527170]: Using the default value passed in for log id: a11894f3-4ce4-478a-a11a-8fa624429f33 ... 10 more
New to this, so probably a very basic question... A user has a query that produces a nicely formatted statistics tab when he runs it. When another user runs the same query, the statistics are blank. It appears that the last item is what generates the stats; how do we make this global or share it with the other users?   index=* ERROR host IN(AEMpub01,AEMpub02,AEMpub03,AEMpub04) | top limit=100 AEMError  
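The usual cause is that AEMError comes from a field extraction saved as a private knowledge object, so other users' searches never see the field. One way to check who owns it and how it is shared (a sketch using the knowledge-object REST endpoint; the attribute name is taken from the query above):

```
| rest /servicesNS/-/-/data/props/extractions
| search attribute=AEMError
| table title eai:acl.owner eai:acl.sharing eai:acl.app
```

If sharing comes back as "user", the owner (or an admin) can change the permissions to app or global under Settings → Fields → Field extractions.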
Hi, we recently upgraded our Splunk instance to 8.1, and after the upgrade we aren't seeing the app name in the application menu. Has anyone faced a similar issue?
Hello experts, I am new to Splunk and trying to get a search query with a subsearch to work. Here is what I have so far: index=palantir_audit host="merlin.palantir.abc.ncc" sourcetype=_json | search "DOS CCD" | search "requestParams.primaryInputs{}.type"=SEARCH_TERMS name=SEARCH | spath output=search_values path=requestParams.primaryInputs{0}.values{0} | spath output=data_sources path=resultParams.additionalContent{}.resources{}.title | table time data_sources search_values The above returns two results at runtime with "DOS CCD" as one or more of the values in the data_sources field, and I also have a "time" field (it doesn't appear to be a reserved word) and a search_values field. I want to replace the second line of the main search with a subsearch using the below. The .csv lookup file has three columns, of which I am returning "DataSource": |inputlookup Palantir_T3_Collection_Lookup_JSON.csv |rename DataSource as data_sources |table data_sources This runs fine and gets the value "DOS CCD" from the lookup file with no problem, but when I try to pass this result into the main search like this, I get no results: index=palantir_audit host="merlin.palantir.abc.ncc" sourcetype=_json [|inputlookup Palantir_T3_Collection_Lookup_JSON.csv |rename DataSource as data_sources |table data_sources] | search "requestParams.primaryInputs{}.type"=SEARCH_TERMS name=SEARCH | spath output=search_values path=requestParams.primaryInputs{0}.values{0} | spath output=data_sources path=resultParams.additionalContent{}.resources{}.title | table time data_sources search_values Any help would be greatly appreciated. Thanks!
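A common gotcha here: the subsearch returns the field name along with the value, so the outer filter becomes data_sources="DOS CCD", and data_sources does not exist until the later spath. Renaming the lookup column to search (or query) makes the subsearch return a bare term that matches the raw events, just as the original | search "DOS CCD" did (a sketch):

```
index=palantir_audit host="merlin.palantir.abc.ncc" sourcetype=_json
    [| inputlookup Palantir_T3_Collection_Lookup_JSON.csv
     | rename DataSource as search
     | table search]
| search "requestParams.primaryInputs{}.type"=SEARCH_TERMS name=SEARCH
| spath output=search_values path=requestParams.primaryInputs{0}.values{0}
| spath output=data_sources path=resultParams.additionalContent{}.resources{}.title
| table time data_sources search_values
```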
Hello Splunk experts, I have an issue with measuring the CPU load on a Linux box. With the query below, I am getting high CPU usage even when there are no activities running on the Linux server. In fact, the server is pretty much idle most of the time; it is being used as a backup server. cpu_load = 100 - PercentIdleTime:    eval cpu_load = 100 - PercentIdleTime | stats avg(cpu_load) as "CPUUsage" by host | eval "CPUUsage"=round('CPUUsage', 2) | where CPUUsage>90    
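Before trusting the rollup, it may help to sanity-check the raw PercentIdleTime values per host, since a multivalue or per-core field can skew the average (a diagnostic sketch; index=os sourcetype=cpu are placeholders for your actual index and sourcetype):

```
index=os sourcetype=cpu
| stats min(PercentIdleTime) as min_idle
        avg(PercentIdleTime) as avg_idle
        max(PercentIdleTime) as max_idle
        by host
```

If min_idle is near 100 on an idle box but the alert still fires, the problem is in the eval/stats pipeline rather than in the data.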
Hi, can someone please help me with this? "The percentage of non-high-priority searches skipped (73%) over the last 24 hours is very high and exceeded the red threshold (20%) on this Splunk instance. Total searches that were part of this percentage = 759. Total skipped searches = 559." Regards, Rahul
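One place to start is the scheduler log in _internal, which records why each search was skipped (a sketch; the scheduler sourcetype and these fields are standard on most instances):

```
index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count
```

Reasons such as hitting the maximum number of concurrent searches usually point to too many searches crammed into the same cron minute; staggering schedules or reducing search load is typically the fix.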
I have an index that has a field called ISSUER_NAME, but now we have a new set of events (with a different log structure) that don't have ISSUER_NAME and are identified by another field. I want to add ISSUER_NAME to those specific events. I tried   eval ISSUER_NAME = if(TXN="33","XXX",ISSUER_NAME)   Here, TXN="33" is the identifier for those events, and XXX is the ISSUER_NAME value I want to give to the events that don't have one. But when I do that, the ISSUER_NAME of events with the old structure returns NULL. Is there a way to add a field to events without altering the existing field? Thank you.
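If the goal is to fill ISSUER_NAME only when it is missing and leave existing values untouched, coalesce may be a cleaner fit (a sketch based on the field names above):

```
| eval ISSUER_NAME = coalesce(ISSUER_NAME, if(TXN="33", "XXX", null()))
```

coalesce returns its first non-null argument, so events that already carry ISSUER_NAME keep their original value, and only the TXN="33" events get the new one.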
Hi, according to the document here, the cluster master distributes an app in an indexer clustering environment: https://docs.splunk.com/Documentation/Splunk/8.1.3/Indexer/Manageappdeployment I think that if the app does not have any index-time extraction configuration, the cluster master may only need to distribute indexes.conf. Why does the cluster master have to distribute the whole app? I believe the app should be located only on the search head, because when a search runs, the search-time extraction configuration (props.conf and transforms.conf) is pushed down to the peers with bundle replication. Could you please let me know why the docs say so? Is it best practice?
Hello, I have a search head (deployment server) with ES and a distributed environment with suricata eve.json monitoring on the remote indexers. I would like to bring CIM-compatible data into ES. TA-suricata is installed on all clients via the deployment server. The Suricata data is properly transformed: I checked the data on a remote indexer and it is properly CIM-parsed. I installed TA-suricata on the search head, but with the same search I don't see the same data output. What should I configure in inputs.conf? I have the following inputs.conf configuration on both the SH and the indexers: [monitor:///var/log/suricata/eve.json] disabled = false index = suricata sourcetype = suricata source = /var/log/suricata/eve.json Any help would be appreciated. Thank you in advance, Chris
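For reference, the monitored path belongs inside the stanza header; a conventional inputs.conf sketch looks like this, and it normally belongs only on the hosts that actually have the file (forwarders/indexers), not on the search head:

```
[monitor:///var/log/suricata/eve.json]
disabled = false
index = suricata
sourcetype = suricata
```

source defaults to the monitored path, so an explicit source line is optional. On the search head, the TA is only needed for its search-time props/transforms; it does not need (and should not have) an active monitor input there.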
Hi, I created a filter with all the months taken from the "SCOPE" field; this goes 12 months back, and in the dashboard I have all the months written in Italian. Now I would like the default to be the month prior to today. I tried with <default>$A_data$</default> <choice value="$A_data$">PREVIOUS_MONTH</choice> but it does not work. I would like it to be selected by default (if today we are in April, "MARCH-2021" should be selected). Thanks, bye, Antonio ------ source ------- <form hideFilters="true">   <label> Results and Details: Web survey</label>   <fieldset submitButton="false">     <input type="dropdown" token="field1">       <label>Mese</label>       <fieldForLabel>field1</fieldForLabel>       <fieldForValue>AMBITO</fieldForValue>       <search>         <query>index=aladin * sourcetype=cruscotto_tnps |search ZONA ="Totale" |eval anno=mvindex(split(AMBITO,"-"),1) |eval mese=mvindex(split(AMBITO,"-"),0) | eval mesi=case( mese="Gennaio","01", mese="Febbraio","02", mese="Marzo","03", mese="Aprile","04", mese="Maggio","05", mese="Giugno","06", mese="Luglio","07", mese="Agosto","08", mese="Settembre","09", mese="Ottobre","10", mese="Novembre","11", mese="Dicembre","12", 1=1, "INV") |eval OrdineAmbito = anno.mesi |sort OrdineAmbito | eval A_mese=strftime(relative_time(now(), "-30d@d"),"%m") | eval A_anno=strftime(relative_time(now(), "-30d@d"),"%Y") | eval A_mesi=case( A_mese="01","Gennaio", A_mese="02","Febbraio", A_mese="03","Marzo", A_mese="04","Aprile", A_mese="05","Maggio", A_mese="06","Giugno", A_mese="07","Luglio", A_mese="08","Agosto", A_mese="09","Settembre", A_mese="10","Ottobre", A_mese="11","Novembre", A_mese="12","Dicembre", 1=1, "INV") |eval A_data= A_mesi."-".A_anno   |table AMBITO,A_data</query>         <earliest>@d</earliest>         <latest>now</latest>       </search>       <default>$A_data$</default>       <choice value="$A_data$">MESE_PRECEDENTE</choice>     </input>   </fieldset>   <row>     <panel>       <html>         <div style="height: 
2em;   display: flex;   align-items: center;   justify-content: center;color:blue;font-style:bold;font-size:200%">          <B> $field1$</B>         </div>         </html>     </panel>   </row>   <row>     <panel>       <html>      </html>     </panel>   </row>   <row>     <panel>       <single>         <title>T-PRO</title>         <search>           <query>index=aladin * sourcetype=cruscotto_tnps |search ZONA ="Totale" |eval anno = strftime(_time,"%Y") | eval mesi=strftime(_time,"%m") | eval mese=case( mesi="01","Gennaio-", mesi="02","Febbraio-", mesi="03","Marzo-", mesi="04","Aprile-", mesi="05","Maggio-", mesi="06","giugno-", mesi="07","Luglio-", mesi="08","Agosto-", mesi="09","Settembre-", mesi="10","Ottobre-", mesi="11","Novembre", mesi="12","Dicembre-", 1=1, "INV") |eval meseanno= mese.anno |where AMBITO = "$field1$" |sort ZONA   |fields T_PRO |stats values(T_PRO)</query>           <earliest>@d</earliest>           <latest>now</latest>         </search>         <option name="drilldown">none</option>         <option name="height">50</option>         <option name="rangeColors">["0x006d9c","0x53a051"]</option>         <option name="rangeValues">[500]</option>         <option name="refresh.display">none</option>         <option name="useColors">1</option>       </single>     </panel>
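One pattern that may help (a sketch, untested against this dashboard): a <default> element cannot itself contain a token computed by the input's own search, but a small token-setting search placed directly under <form> can compute the previous-month label and assign it to form.field1, which pre-selects the dropdown. Note that "-1mon@mon" lands on the previous calendar month more reliably than "-30d@d":

```
<search>
  <query>| makeresults
| eval A_mese=strftime(relative_time(now(), "-1mon@mon"), "%m")
| eval A_anno=strftime(relative_time(now(), "-1mon@mon"), "%Y")
| eval A_mesi=case(A_mese="01","Gennaio", A_mese="02","Febbraio", A_mese="03","Marzo",
                   A_mese="04","Aprile", A_mese="05","Maggio", A_mese="06","Giugno",
                   A_mese="07","Luglio", A_mese="08","Agosto", A_mese="09","Settembre",
                   A_mese="10","Ottobre", A_mese="11","Novembre", A_mese="12","Dicembre")
| eval A_data=A_mesi."-".A_anno</query>
  <done>
    <set token="form.field1">$result.A_data$</set>
  </done>
</search>
```

With this in place, the <default> and computed <choice> elements inside the input can be removed.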
Hi, I want to raise tickets in ServiceNow using the HTTP request template in AppDynamics. Do we need any additional licenses on either platform? Thank you.
Hi, I've got a couple of Splunk endpoints: an HTTP endpoint and a raw endpoint used by Kinesis. We recently noticed one of the endpoints died, as Kinesis uses SSL. Proactive monitoring for SSL certs aside, I'd like to poll the Splunk endpoints to confirm they're working. I've tried posting blank data as suggested elsewhere, but that throws a 400 error, which isn't helpful. Is there something I can do to monitor these and return a 200 status code with some string I can check for, without ingesting status-check data into Splunk at one-minute intervals? Cheers, Boffhead
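The HEC health endpoint may be what you want: it answers a plain GET, needs no token, and ingests nothing (a sketch; available on recent HEC versions, substitute your own host):

```
curl -k https://<splunk-host>:8088/services/collector/health
```

A healthy collector returns HTTP 200 with a small JSON body along the lines of {"text":"HEC is healthy","code":17}, which gives you both a status code and a string to check for.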
I have Splunk Cloud and am trying to find where to configure the S3 bucket info for SmartStore.
Hi team, can you please help in extracting the 123456 from the following string? hello world  \"employee\":123456  
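A rex that sidesteps the backslash/quote escaping entirely: match the literal word employee, skip any non-digit characters, then capture the digits (the field name employee_id is my own choice):

```
... | rex "employee\D*(?<employee_id>\d+)"
```

This extracts employee_id=123456 whether the quotes and backslashes around "employee" are literal characters in the event or just display escaping.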
Hi all, I am trying to replace gentimes in my query due to slowness. I have read that adding the field to an automatic lookup can help me replace gentimes. The lookup has already been defined and created, but I still can't seem to add the field to the lookup. Any thoughts on how I can add the field to the automatic lookup? Info: This is the lookup I need to associate the app name with: lkp_rankedReviews. This is the app name I need to associate the lookup with: app=xyz. In the automatic lookup settings I have tried Lookup input fields: lkp_rankedReviews = app, and also Lookup output fields: lkp_rankedReviews = app.
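For reference, in the automatic-lookup UI the input-field mapping is <field in lookup> = <field in events>, both plain field names (not the lookup name), and the definition is applied to a sourcetype, source, or host rather than to an app literal. In props.conf form it would look roughly like this (a sketch; your_sourcetype and the output column rank are hypothetical placeholders for your actual sourcetype and lookup columns):

```
[your_sourcetype]
LOOKUP-rankedReviews = lkp_rankedReviews app AS app OUTPUTNEW rank AS rank
```

Here app AS app maps the lookup's app column to the event field app (so app=xyz events match), and OUTPUTNEW writes the looked-up columns without overwriting fields that already exist.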