All Topics

I would like to get the values below from Splunk into a shell script. I am creating an alert for these values and using a webhook to invoke the shell script. I am using the webhook link to trigger the script, but I don't know how to get the Splunk search results into the shell script. Can someone suggest which command or code I should use to capture the values from Splunk?
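To sketch what the receiving side could look like: Splunk's webhook alert action POSTs a JSON document to the configured URL, and the receiver just has to parse that body before handing values to the shell script. The key names below (search_name, sid, result) follow the documented webhook payload format, but verify them against your Splunk version; the sample values are made up.

```python
import json

def parse_splunk_webhook(body):
    """Pull the pieces a shell script would need out of a Splunk
    webhook alert payload (the JSON body POSTed by the alert action).
    Key names follow the documented payload shape; verify against
    your own Splunk version."""
    payload = json.loads(body)
    return {
        "search_name": payload.get("search_name"),
        "sid": payload.get("sid"),
        "first_result": payload.get("result", {}),
    }

# Example payload in the documented shape (values are made up):
sample = json.dumps({
    "search_name": "disk_alert",
    "sid": "scheduler__admin__search__RMD5_at_1638400000_1",
    "result": {"host": "web01", "pct_used": "91"},
})
info = parse_splunk_webhook(sample)
print(info["first_result"]["host"])  # web01
```

From there the script can export the extracted values as environment variables or pass them as arguments to the shell script it invokes.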
I am getting this data when pulling events from a sourcetype: Name=Microsoft Hyper-V Network Adapter _2. Now I want to show this in a table, but when I use | table Name it shows only "Microsoft", i.e. only the first word of the value. How can I show the whole value of the Name field? Please help.
I have event data from the search result in the format shown in the image. I want to extract the following fields with their corresponding values, excluding the remaining fields/data in the event string:

id = b0ad6627-a6e1-4f5e-92f4-9c2deaa1ff2a_1cd4b06f83caac09
start_date_time = 1638433382 (value always required)
end_date_time = 1638433491, or null if the value is not present
current = <value> (only if the field exists; 6 in the example)
total = <value> (6 in the example)
status_type = COMPLETED
bot_uri = repository:///Automation%20Anywhere/Bots/Test%20A2019/AALogTestBot

I tried using

<search query> | rex field=_raw "(?msi)(?<ev_field>\{.+\}$)" | spath input=ev_field

to extract all the fields in the event data, but it did not change the search results. Any suggestion or help is highly appreciated; I am a newbie to Splunk. TIA

12/2/21 7:24:52.106 PM
2021-Dec-02 Thu 19:24:52.106 INFO [pool-12-thread-1] - com.automationanywhere.nodemanager.service.impl.NodeMessagingServiceImpl - {} - writeSuccess(NodeMessagingServiceImpl.java:395) - Message eventData { id: "b0ad6627-a6e1-4f5e-92f4-9c2deaa1ff2a_1cd4b06f83caac09" bot_execution { start_date_time { seconds: 1638433382 nanos: 210329300 } end_date_time { seconds: 1638433491 nanos: 993822800 } progress { current: 6 total: 6 percentage: 100 } status_type: COMPLETED bot_uri: "repository:///Automation%20Anywhere/Bots/Test%20A2019/AALogTestBot?fileId=1098948&workspace=PRIVATE" }} sent to CR successfully.
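In case it helps to see the extraction logic outside of SPL (the event body is protobuf-style text, not JSON, which is why spath alone does nothing with it), here is a sketch in Python with one pattern per field against a trimmed copy of the sample event; the same regexes could be adapted into a chain of rex commands. Optional fields simply come back as None when absent.

```python
import re

# Trimmed copy of the sample event's message portion.
SAMPLE = (
    'Message eventData { id: "b0ad6627-a6e1-4f5e-92f4-9c2deaa1ff2a_1cd4b06f83caac09" '
    'bot_execution { start_date_time { seconds: 1638433382 nanos: 210329300 } '
    'end_date_time { seconds: 1638433491 nanos: 993822800 } '
    'progress { current: 6 total: 6 percentage: 100 } '
    'status_type: COMPLETED '
    'bot_uri: "repository:///Automation%20Anywhere/Bots/Test%20A2019/AALogTestBot'
    '?fileId=1098948&workspace=PRIVATE" }} sent to CR successfully.'
)

# One pattern per wanted field; bot_uri stops at '?' to drop the query part.
PATTERNS = {
    "id": r'id: "([^"]+)"',
    "start_date_time": r'start_date_time \{ seconds: (\d+)',
    "end_date_time": r'end_date_time \{ seconds: (\d+)',
    "current": r'current: (\d+)',
    "total": r'total: (\d+)',
    "status_type": r'status_type: (\w+)',
    "bot_uri": r'bot_uri: "([^"?]+)',
}

def extract(event):
    fields = {}
    for name, pattern in PATTERNS.items():
        m = re.search(pattern, event)
        fields[name] = m.group(1) if m else None  # optional fields -> None
    return fields

fields = extract(SAMPLE)
print(fields["status_type"])  # COMPLETED
```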
Hi Team, I created my account on the support portal with my official email address and filled in all the details, but I don't remember the username and password I chose at the time of user creation. Now I am unable to create a new account, since an account already exists with that email ID. It seems we have to delete the account associated with that email ID. How can we delete the account associated with my official email ID so that I can create a new one?
Hi all. I am ingesting a CSV file from a UF. The CSV is updated daily by the app team at a particular time, but I have seen data being ingested into Splunk on an hourly basis, which should not be the case; it should be ingested only once per day. I need suggestions on how to eliminate this and make sure the data is ingested only once per day.
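For reference, a sketch of the two input styles involved (paths and values below are made-up assumptions, not a drop-in fix): a [monitor] stanza re-reads the file whenever it changes, so if something touches or rewrites the CSV hourly, hourly ingestion is the expected behaviour; a scripted input with a 24-hour interval is one common way to force once-a-day pickup. Check this against your actual inputs.conf on the UF before changing anything.

```ini
# inputs.conf on the UF -- illustrative sketch only.

# A monitor input re-reads on every file change, so hourly file
# updates mean hourly ingestion by design.
[monitor:///opt/app/data/daily_report.csv]
index = main
sourcetype = csv

# Alternative: a scripted input that emits the file once per day.
[script:///opt/splunkforwarder/etc/apps/myapp/bin/read_daily_csv.sh]
interval = 86400
index = main
sourcetype = csv
```

If the monitor input stays, it is also worth confirming with the app team whether the file's modification time really changes only once per day.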
Hello all,

I am trying to extract a field from the event below, and the extraction is missing the last part of the field. Please help me get this extracted.

Event:
117691777,00004105,00000000,5064,"20211202100006","20211202100006",4,-1,-1,"SYSTEM","","IPSC002",94882466,"MS932","Server-I ジョブ(Server:/IZ_SSYS_DB/DAILY/MP7/MP_D41/物流ルートテーブルデータ送信:@20H7984)を開始します(host: Host, JOBID: 229589)","Information","tdi01","/HITACHI/JP1/AJS2","JOB","AJSROOT1:/IZ_SSYS_DB/DAILY/MP7/MP_D41/物流ルートテーブルデータ送信","JOBNET","Server:/IZ_SSYS_DB/DAILY/MP7","Server:/IZ_SSYS_DB/DAILY/MP7/MP_D41/物流ルートテーブルデータ送信","START","20211202100006","","",16,"A0","Server:/IZ_SSYS_DB/DAILY","A1","MP7","A2","MP_D41/物流ルートテーブルデータ送信","A3","@20H7984","ACTION_VERSION","0600","B0","n","B1","2","B2","tdi01","B3","IPSC002","C0","IPSC202","C1","","C6","r","H2","188677","H3","pj","H4","q","PLATFORM","NT",

Extraction used:
(?:[^,]+,){14}(?<alert_description>[^,]+),

However, the same extraction works as expected on the event below:
117727680,00004103,00000000,5064,"20211202172828","20211202172828",4,-1,-1,"SYSTEM","","IPSC002",94918000,"MS932","Server-I ジョブネット(Server:/HTHACHU/IJH03/IJH03:@20I8438)が正常終了しました","Information","tdi01","/HITACHI/JP1/AJS2","JOBNET","AJSROOT1:/HTHACHU/IJH03/IJH03","JOBNET","AJSROOT1:/HTHACHU/IJH03/IJH03","AJSROOT1:/HTHACHU/IJH03/IJH03","END","20211202172827","20211202172828","",10,"A0","AJSROOT1:/HTHACHU/IJH03","A1","IJH03","A3","@20I8438","ACTION_VERSION","0600","B0","n","B1","0","B3","IPSC002","H2","853876","H3","n","PLATFORM","NT",

Please help extract the highlighted field.
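A likely cause, shown outside of SPL: in the failing event the 15th field contains a comma inside its double quotes ("(host: Host, JOBID: 229589)"), so [^,]+ stops at that embedded comma, while the working event's 15th field has no comma. A quick Python sketch against a trimmed, ASCII-only copy of the event demonstrates the difference between comma-counting and quote-aware parsing:

```python
import csv
import io
import re

# Trimmed stand-in for the failing event: the 15th field contains a
# comma inside its double quotes, which is what breaks [^,]+.
EVENT = ('117691777,00004105,00000000,5064,"20211202100006","20211202100006",'
         '4,-1,-1,"SYSTEM","","IPSC002",94882466,"MS932",'
         '"Server-I job start (host: Host, JOBID: 229589)","Information"')

# The comma-counting regex from the question: stops at the embedded comma.
naive = re.match(r'(?:[^,]+,){14}([^,]+),', EVENT).group(1)
print(naive)  # "Server-I job start (host: Host

def field_15(event):
    # A CSV parser honours the quoting, so the embedded comma stays
    # inside the field (and the surrounding quotes are stripped).
    return next(csv.reader(io.StringIO(event)))[14]

print(field_15(EVENT))  # Server-I job start (host: Host, JOBID: 229589)
```

On the Splunk side, the equivalent fix would be a quote-aware extraction (e.g. matching quoted fields explicitly) rather than counting bare commas.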
Hello Everyone, This is a general question that I haven't found an answer to yet. I am aware of how a license violation is carried out (https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/Aboutlicenseviolations#:~:text=What%20is%20a%20license%20warning,clock%20on%20the%20license%20master.) In a training course it was mentioned that license warnings are reported to Splunk; is that correct? What I would like to know is the following: Does Splunk receive anything? Does Splunk receive messages on license warnings or violations? Does Splunk receive messages when a license pool goes into warning or violation? What does Splunk receive? Data volume? License ID? Thank you all for your help.
Hi, I retrieve the fields of a dropdown list from a CSV file. It works, but the problem I have is that randomly the "filling on going" message appears and persists, and as a consequence I am unable to update the dashboard panels, because every panel references this dropdown list. There are 1300 lines in my CSV file. What is the problem, please?

<input type="dropdown" token="site" searchWhenChanged="true">
  <label>Site</label>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <search>
    <query>| inputlookup site.csv</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
Hello, I have some issues extracting fields from the following raw event. I should be getting the following fields from this event. Any help will be highly appreciated. Thank you!

Field Names: TIMESTAMP, USERTYPE, USERID, SYSTEM, EVENTTYPE, EVENTID, SRCADDR, SESSIONID, TAXPERIOD, RETURNCODE, TAXFILERTIN, VARDATA

Sample Event: {"log":"\u001b[0m\u001b[0m05:14:09,516 INFO  [stdout] (default task-4193) 2021-12-02 05:14:09,516 INFO  [tltest.logging.TltestEventWriter] \u003cMODTRANSAUDTRL\u003e\u003cEVENTID\u003e1210VIEW\u003c/EVENTID\u003e\u003cEVENTTYPE\u003eDATA_INTERACTION\u003c/EVENTTYPE\u003e\u003cSRCADDR\u003e192.131.8.1\u003c/SRCADDR\u003e\u003cRETURNCODE\u003e00\u003c/RETURNCODE\u003e\u003cSESSIONID\u003etfYU4-AEPnEzZg\u003c/SESSIONID\u003e\u003cSYSTEM\u003eTLCATS\u003c/SYSTEM\u003e\u003cTIMESTAMP\u003e20211202051409\u003c/TIMESTAMP\u003e\u003cUSERID\u003eAX3BLNB\u003c/USERID\u003e\u003cUSERTYPE\u003eAdmin\u003c/USERTYPE\u003e\u003cVARDATA\u003eCASE NUMBER, CASE NAME;052014011348000,BANTAM LLC\u003c/VARDATA\u003e\u003c/MODTRANSAUDTRL\u003e\n","stream":"stdout","time":"2021-12-02T05:14:09.517228451Z"}
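One detail worth noting: the \u003c and \u003e sequences are just JSON-escaped < and >, so once the event is parsed as JSON, the log field is plain XML-style text with one tag per wanted field. A Python sketch against a trimmed copy of the event shows the idea; on the Splunk side the equivalent would likely be spath to pull out the log field, then rex (or xmlkv) on it, but that is hedged and untested against your data.

```python
import json
import re

# Trimmed copy of the sample event; json.loads turns \u003c / \u003e
# back into < and >, exposing the XML-style audit tags.
RAW = ('{"log":"\\u003cMODTRANSAUDTRL\\u003e'
       '\\u003cEVENTID\\u003e1210VIEW\\u003c/EVENTID\\u003e'
       '\\u003cUSERID\\u003eAX3BLNB\\u003c/USERID\\u003e'
       '\\u003cSYSTEM\\u003eTLCATS\\u003c/SYSTEM\\u003e'
       '\\u003c/MODTRANSAUDTRL\\u003e\\n",'
       '"stream":"stdout","time":"2021-12-02T05:14:09.517228451Z"}')

def extract_audit_fields(raw):
    decoded = json.loads(raw)["log"]          # escapes become real < >
    # Each leaf tag <NAME>value</NAME> becomes a (name, value) pair.
    return dict(re.findall(r'<([A-Z]+)>([^<]*)</\1>', decoded))

fields = extract_audit_fields(RAW)
print(fields["USERID"])  # AX3BLNB
```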
Hi Everyone, I have two applications and I have created dashboards for the apps: index=epaas_epaas2_idx ns=blazegateway app_name=blazecrsgateway* I need to get the following info: Total YTD Volume for PSF Push API, and Total Volume to GRS YTD. Can someone guide me on how to get these two figures from the index, ns, and app name?
Hi, currently my scheduled alert runs every five minutes, but I need it to trigger when the event count goes above 2 in a minute. What is the best way to handle this?
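One common SPL shape for this is to bucket events by minute and alert on the buckets that exceed the threshold (roughly: bin _time span=1m | stats count by _time | where count > 2 — hedged, adapt to your actual search). The bucketing logic itself, sketched in Python:

```python
from collections import Counter

def minutes_over_threshold(epoch_times, threshold=2):
    """Return the start (epoch seconds) of each one-minute bucket
    whose event count exceeds the threshold -- the same logic as
    bucketing _time by 1m and keeping buckets where count > 2."""
    per_minute = Counter(int(t // 60) for t in epoch_times)
    return sorted(m * 60 for m, c in per_minute.items() if c > threshold)

# Five events: three land in the same minute, so only that minute trips.
events = [1638433200, 1638433210, 1638433250, 1638433262, 1638433380]
print(minutes_over_threshold(events))  # [1638433200]
```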
Hi There, I am probably making this more confusing for myself than it needs to be, but it's a simple concept. Here is the scenario: if an invite is emailed and no confirmation is received within 1 day of the email being sent, then it is "In Progress"; otherwise it's a failure. Please help me formulate this; basically, if no confirmation is received within 1 day, it's in progress. I would like to keep my times all in epoch. Thank you in advance.

| makeresults
| eval email_sent=1637978619.056000
| eval time_passed_no_confirmation=86400
| eval confirmation_remains_null="null"
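One reading of the rule, sketched in Python with all times in epoch seconds (the function name and the exact branch behaviour are my assumptions, since the prose leaves the post-window case ambiguous; adjust the branches to whatever the real rule is):

```python
def invite_status(email_sent, now, confirmed=False, window=86400):
    """Status of an invite, all times in epoch seconds.
    Assumed rule: unconfirmed invites stay "In Progress" until a
    full day (window) has passed, then they become "Failure"."""
    if confirmed:
        return "Confirmed"
    return "In Progress" if now - email_sent <= window else "Failure"

print(invite_status(1637978619, 1637978619 + 3600))       # In Progress
print(invite_status(1637978619, 1637978619 + 2 * 86400))  # Failure
```

In SPL terms this would map to an eval comparing now() - email_sent against 86400 when the confirmation field is null.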
On various occasions I find myself writing formulas like this (simplified version):

eval cat=case(like(CC, "TenantA%"), "ABC", like(CC, "TenantB%"), "BBC", true(), "Enterprise")

Or mapping hosts to regions:

eval site=case(like(host, "%-au%"), "AWS US", like(host, "%-ac%"), "AWS CA", like(host, "%-ae%"), "AWS EU", true(), "UnKnown")

Sometimes I use the same mappings across many reports and dashboards, and copy/paste does not cut it. They also need to be updated from time to time. Any suggestions?
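Two common ways to centralize such mappings are a lookup (a CSV plus a wildcard-matching transform) or a search macro shared from one app. A sketch of the macro route, with made-up stanza and argument names:

```ini
# macros.conf in a shared app -- illustrative sketch, names are made up.
[tenant_category(1)]
args = cc_field
definition = case(like($cc_field$, "TenantA%"), "ABC", like($cc_field$, "TenantB%"), "BBC", true(), "Enterprise")
iseval = 0
```

It would then be called inside eval as | eval cat=`tenant_category(CC)`, so updating the definition in one place updates every report and dashboard that uses it. The lookup route (match_type = WILDCARD in transforms.conf) has the added benefit that non-admins can maintain the mapping CSV.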
Hi there! We have a daily push from Google over to our Splunk instance that provides directory information (total number of users, etc.). I have a very simple query today that parses the information I need into two values covering the last directory push into Splunk:

index="google" sourcetype="*directory*" "emails{}.address"="*@mydomain.com" | chart count by archived

This returns two values from the latest directory push. My question is this: I would like to embed this in a dashboard so that we can show historical values of this data in a bar chart over time, i.e. how the directory is growing or shrinking week-to-week or month-to-month. I am not sure whether I should head down the timechart path or use a different method, given that this is a periodic (every 24 hours) single entry pushed into the Splunk server. Thoughts on which path to start down would be super helpful.
I am attempting to use a search from IT Essentials Learn named "Alert when host stops reporting data - Linux - IT Essentials Work". Is it possible to filter this alert by host type? I've performed a number of tests now, and it seems my only option is to search against all hosts. Here is the search from IT Essentials Learn:

| tstats dc(host) as val max(_time) as _time where index="<INDEXES-TO-CHECK>" host="<HOSTS-TO-CHECK>" by host
| append [| metadata type=hosts index="<INDEXES-TO-CHECK>" | table host lastTime | rename lastTime as _time | where _time>now()-(60*60*12) | eval val=0]
| stats max(val) as val max(_time) as _time by host
| where val=0
| rename val as "Has Data"
| eval "Missing Duration"=tostring(now()-_time, "duration")
| table host "Has Data" "Missing Duration"

I modified the two index lines and the host line. If I use * for all three, it kind of works but checks against every host. If I use host=*dev*, it displays all hosts without *dev* in the name as evaluating to 0, whereas all the *dev* hosts get evaluated to 1. To counteract this I tried adding a where host=*dev* elsewhere (in the metadata portion, as a where clause at the end of all the metadata piping, as a where clause next to where val=0, etc.), but this just completely removes any host that isn't sending data from the list (or removes all hosts), so that also does not work. Is it possible to split this up based on hosts, or am I stuck with all or nothing?

Edit: I tried adding a where all the way at the end. It does not work with host="*dev*"; however, I can use host!="some host name" to filter those out. I'm not sure why I can use negation but not wildcards.

Edit 2: I am searching over the prior 5 minutes, if that matters at all.
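A possible explanation for the last edit, hedged: where evaluates an eval expression and compares strings literally, so host="*dev*" in a where clause compares against the literal string "*dev*" and matches nothing, while the search command does honour * as a wildcard (and host!="some host name" in where works precisely because it is a literal comparison). The wildcard semantics that where lacks, sketched in Python with made-up host names:

```python
from fnmatch import fnmatch

# fnmatch gives glob-style matching: * matches any run of characters,
# which is what `search host="*dev*"` does and `where host="*dev*"`
# does not (where compares the literal string "*dev*").
hosts = ["web-dev-01", "web-prod-01", "db-dev-02"]
dev_hosts = [h for h in hosts if fnmatch(h, "*dev*")]
print(dev_hosts)  # ['web-dev-01', 'db-dev-02']
```

In SPL, the wildcard-aware equivalents inside where would be like(host, "%dev%") or a trailing | search host="*dev*".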
Hello, I am running a * search in an app and it returns several columns in the CSV extract, one of which is named 'source'. I want to return the distinct values of 'source', but neither of the below works:

| values(source)
or
| distinct source

Any ideas? Thanks!
I have two independent queries run on two different indexes that give me a list of requestIds. I want to exclude the requestIds returned by the second query from my search. I am trying to use the following query to do so, but it is not filtering out the results from the second query. What am I doing wrong here?

index="index1" <query1> | rename requestId AS Result | table Result | search NOT [search index="index2" <query2> | rename RequestId AS Result | table Result]
We have an application that sends all its log messages to Splunk (so far so good), and an alert configured to fire whenever a message with severity above INFO level is logged. This works OK most of the time, except that when the application restarts, multiple such warnings and errors are logged by some of its threads. We don't care about these, because the main thread has already announced that it is shutting down. How can I phrase the search underlying our alert to exclude any log entries made after the "I am shutting down" message and before the "I started up" one? To clarify: we want Splunk to receive all the log entries; we just don't want the alert to be triggered by those emitted during the program restart.
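The suppression logic itself is a simple state machine over the event stream, sketched here in Python (the marker strings and tuple shape are stand-ins; on the Splunk side this "between markers" suppression is often built with streamstats or transaction over the marker events, hedged):

```python
def alertable(entries):
    """Return the WARN/ERROR messages that should trigger the alert,
    dropping any that fall between a shutdown marker and the next
    startup marker. `entries` are (message, severity) tuples in time
    order; the marker strings are stand-ins for the real ones."""
    suppressed = False
    keep = []
    for message, severity in entries:
        if "I am shutting down" in message:
            suppressed = True            # restart window opens
        elif "I started up" in message:
            suppressed = False           # restart window closes
        elif severity != "INFO" and not suppressed:
            keep.append(message)
    return keep

log = [
    ("service ready", "INFO"),
    ("disk almost full", "WARN"),
    ("I am shutting down", "INFO"),
    ("thread pool interrupted", "ERROR"),  # restart noise, suppressed
    ("I started up", "INFO"),
    ("real problem", "ERROR"),
]
print(alertable(log))  # ['disk almost full', 'real problem']
```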
I need help with joining events from two different indexes that are related by the same value in one specific field. Below is a simple example:

index=source1 | table device.hostname,device.serialnumber

Results:
device.hostname   device.serialnumber
host1             ABC
host2             DEF

index=source2 | table hostname,user

Results:
hostname   user
host1      john
host2      mary

I would like to join these two searches to get the following results:

device.hostname   device.serialnumber   user
host1             ABC                   john
host2             DEF                   mary

Thanks in advance for your help.
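What is wanted here is a left join keyed on the hostname value. The logic, sketched in Python (in SPL this is commonly done with a lookup, or with stats values(*) by a normalized host field, rather than the join command — hedged, that choice depends on data volumes):

```python
def join_on_host(source1, source2):
    """Left-join source2's user onto source1 rows, keyed on the
    hostname value -- the same shape as a lookup in SPL."""
    users = {row["hostname"]: row["user"] for row in source2}
    return [dict(row, user=users.get(row["device.hostname"]))
            for row in source1]

source1 = [
    {"device.hostname": "host1", "device.serialnumber": "ABC"},
    {"device.hostname": "host2", "device.serialnumber": "DEF"},
]
source2 = [
    {"hostname": "host1", "user": "john"},
    {"hostname": "host2", "user": "mary"},
]
joined = join_on_host(source1, source2)
print(joined)
```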
I believe there is a latent bug in the aws_config_cli.py script for the AWS Add-on. The list function below is from the latest version (5.2.0):

def list(self):
    names = None
    if self.params.names:
        names = self.params.names.split(',')
    results = self.config_mgr.list(
        self.endpoint, self.params.hostname, names)
    items = []
    for result in results:
        item = copy.deepcopy(result['content'])
        item['name'] = result['name']
        items.append(item)
    print json.dumps(items, indent=2)

Notice the print statement. In Python 3, print is a function and thus requires parentheses, so the last line should be:

print(json.dumps(items, indent=2))

Additionally, in compose_cli_args,

for resource, desc in resources.iteritems():

should be

for resource, desc in resources.items():

because dict.iteritems() was removed in Python 3.