The link was just for reference, usually, you need to deploy an app or integration in Service Now first so that the add-on on Splunk integrates with Service Now.
Hello, I'm so pleased to find this burgeoning community of professionals here. I can't do any search whatsoever in my Splunk installation. It is installed locally on a Windows 11 machine, and after a lot of trial and error I had to install it again on a second machine, yet the same is the case. I can run a pre-constructed query if I select it from there, but I can't type anything myself into the search bar. Please, I need your help.
index=imdc_nagios_hadoop sourcetype=icinga host=* "Load_per_CPU_core" "PROBLEM"
| fields host
| transaction host startswith="To:"
| search "To: <mail-addr>"
| rex field=_raw "Host:(?<src_host_1>.*) - Service:(?<Service_1>.*) State:(?<State_1>.*)"
| rex field=_raw "Subject: (?<Subject>.*)"
| rex field=Subject "PROBLEM - (?<src_host_2>.*) - (?<Service_2>.*) is (?<State_2>.*)"
| rex field=_raw "(?<Additional_Info>.*)\nTo:"
| eval Service=if(isnull(Service_1),Service_2,Service_1), src_host=if(isnull(src_host_1),src_host_2,src_host_1), State=if(isnull(State_1),State_2,State_1)
| fields host, Service, src_host, State, Subject, Additional_Info
| lookup hostdata_lookup.csv host as src_host
| table src_host, Service, State, _time, cluster, isvm
| rename _time as Start_time
| search isvm=N AND cluster=*EDGE*
| eval Start_time=strftime(Start_time, "%m/%d/%Y - %H:%M:%S")
| sort Start_time

index=imdc_nagios_hadoop sourcetype=icinga host=* "Load_per_CPU_core" "RECOVERY"
| fields host
| transaction host startswith="To:"
| search "To: <mail-addr>"
| rex field=_raw "Host:(?<src_host_1>.*) - Service:(?<Service_1>.*) State:(?<State_1>.*)"
| rex field=_raw "Subject: (?<Subject>.*)"
| rex field=Subject "RECOVERY - (?<src_host_2>.*) - (?<Service_2>.*) is (?<State_2>.*)"
| rex field=_raw "(?<Additional_Info>.*)\nTo:"
| eval Service=if(isnull(Service_1),Service_2,Service_1), src_host=if(isnull(src_host_1),src_host_2,src_host_1), State=if(isnull(State_1),State_2,State_1)
| fields host, Service, src_host, State, Subject, Additional_Info
| lookup hostdata_lookup.csv host as src_host
| table src_host, Service, State, _time, cluster, isvm
| rename _time as End_time
| search isvm=N AND cluster=*EDGE*
| eval End_time=strftime(End_time, "%m/%d/%Y - %H:%M:%S")
| sort End_time

No, recovery has events. As I said, one search gives us "Icinga Problem" and I have another search that gives us "Icinga Recovery".
Using join on the Icinga Problem start time and the Icinga Recovery end time: if the time to recovery is more than 15 minutes, an alert needs to be triggered.
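A sketch of one way to do this without join, assuming the field names extracted in the searches above (src_host, Service). It pulls both PROBLEM and RECOVERY events in a single search, takes the earliest Problem time and the latest Recovery time per host and service, and keeps only pairs where recovery took longer than 15 minutes:

```spl
index=imdc_nagios_hadoop sourcetype=icinga host=* "Load_per_CPU_core" ("PROBLEM" OR "RECOVERY")
| transaction host startswith="To:"
| search "To: <mail-addr>"
| rex field=_raw "Subject: (?<Subject>.*)"
| rex field=Subject "(?<Type>PROBLEM|RECOVERY) - (?<src_host>.*) - (?<Service>.*) is (?<State>.*)"
| eval Start_time=if(Type="PROBLEM", _time, null()), End_time=if(Type="RECOVERY", _time, null())
| stats min(Start_time) as Start_time, max(End_time) as End_time by src_host, Service
| eval duration_min=round((End_time-Start_time)/60, 1)
| where duration_min > 15
```

If this is saved as an alert, it can simply trigger on "number of results > 0".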
Hi @manjunathmeti, thanks for the suggestion, but why would I do that? The link you sent relates to the app "Splunk Add-on for ServiceNow", whereas I am using the app "ServiceNow Security Operations Event Ingestion Addon for Splunk Enterprise". Thanks
Hi @KeithH, Did you configure ServiceNow to integrate with your Splunk? If not, you can refer to this: https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/ConfigureServiceNowtointegratewithSplunkEnterprise.
Hi @Bracha, Try with OUTPUT. If the OUTPUTNEW clause is specified, the lookup is not performed for events in which the output fields already exist. If the OUTPUT clause is specified, the output lookup fields overwrite existing fields in the events.
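A minimal illustration of the difference, using a hypothetical lookup definition my_lookup with input field host and output field owner:

```spl
| lookup my_lookup host OUTPUT owner
| lookup my_lookup host OUTPUTNEW owner
```

The first line overwrites any existing owner field on every matching event; the second only fills in owner on events where the field is missing, which is why a field that already exists (even as empty) can appear not to update with OUTPUTNEW.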
Hello. When creating a sourcetype from Splunk Web, the CHARSET setting lets you declare various character encodings, and as I understand it, multiple patterns are provided even for the Shift-JIS encoding alone, such as SHIFT-JIS and SJIS. Can anyone explain the difference between these?
Hi, I have installed the "ServiceNow Security Operations Event Ingestion Addon for Splunk Enterprise" app and configured it using Basic Auth. When I try to send an event I get the error: command="snsecingest", Unable to forward notable event. After putting some logging in the Python code, I can see the underlying error is: {"error":{"message":"Requested URI does not represent any resource","detail":null},"status":"failure"} Even a simple curl straight to the endpoint fails with the same error. Does anyone know if this endpoint (supplied with the app) might have changed, or does it need to be created for each domain? The endpoint I have is: https://XXXXXXdev.service-now.com/api/sn_sec_splunk_v2/event_ingestion Any suggestions would be appreciated. Thanks
I plan on going from RF1/SF1 to RF2/SF2 at that time. So should I be covered then?
The developer of the app has decided to make the app restricted, so only approved users can download it. If you are a HashiCorp Vault Enterprise user, there are instructions under the Details tab of the Splunkbase page, with a link to the form for requesting access to the app.
I'm not aware of an app that can make an editable column in a table which would save to a lookup table. It sounds like a nice idea. The best thing I can suggest is to use a lookup in your search, and then near the table put a link to the lookup table as viewed in the Lookup Editor app. This way, users can see the comments in the table, then click on the link to open the Lookup Editor and add new comments (assuming the permissions allow it).
When I view this app (https://splunkbase.splunk.com/app/5093) on Splunkbase it shows that the download is restricted.  Why is that?  I would like to install it on our cloud stack.
I suspect the issue lies with the line: | lookup EventCodes EventCode,LogName OUTPUTNEW desc I assume this is intended to use a lookup definition called EventCodes. Could you try using inputlookup on EventCodes in a separate search and see which, if any, columns appear? | inputlookup EventCodes If there are no results, then either EventCodes does not exist as a lookup definition or you have no permissions to view it. If there are columns but there are none called "EventCode","LogName" or "dest", then you'll need to adjust those column names in the lookup command.
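As a further check, one way to list the lookup definitions visible to you is the rest command (assuming your role has permission to use it; the field names shown are from the REST API's standard ACL metadata):

```spl
| rest /servicesNS/-/-/data/transforms/lookups
| table title, eai:acl.app, eai:acl.sharing
```

If EventCodes does not appear in the results, the definition either doesn't exist or isn't shared with the app or role you are searching from.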
You can set recipients as a hidden field by prepending '_' to the field name. This will prevent the recipients column from appearing in the table, but the token will still work. | eval _recipients = "email1@email.com, email2@email.com" Then use: $result._recipients$ in the "action.email.to =" I would also suggest putting this _recipients eval at the end of your search so it does not accidentally get removed by things like "table". It should also work if you put the eval statement into a macro.
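Putting that together, a sketch of what the alert stanza in savedsearches.conf might look like (the stanza name, search, and addresses are placeholders, not a definitive configuration):

```ini
[My Log Level Alert]
search = index="_internal" | dedup log_level | table log_level | eval _recipients = "email1@email.com, email2@email.com"
action.email = 1
action.email.to = $result._recipients$
```

Note that $result.fieldname$ resolves from the first result row, so the eval should run after any transforming commands (stats, table, etc.) so the field is present in the final results.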
In the CSV file I have id, system, time_range, count_err. I received a ready-made dashboard that monitors the DAGs from Airflow. I want to create, for each DAG, its own alert with the same logic as the dashboard, with one small change. In the dashboard, I mark SUCCESS if the Airflow logs return success within the time frame given in the corresponding field of the CSV file, and ERROR if success is not returned or FAILED is returned. In the alert, I want an ERROR if FAILED is received as many times as listed in the CSV file, or if success is not returned within the time_range I specified in the CSV file. The dashboard is built from the file with the syntax: [|inputlookup xxx.csv .....] |lookup xxx.csv dag_id OUTPUTNEW system time_range And I want to add a field: |lookup xxx.csv dag_id OUTPUTNEW system time_range count_err And I don't know why the extra field is not displayed.
It's working! Thank you for your quick response.
Hi @Roberto.Barnes, If the reply from Manish helped, please click the "Accept as Solution" button to confirm your question has been answered. If you still need help, please reply to keep the conversation going! 
I have been trying to achieve "grouped email recipients", and while it is possible, it just won't behave the way I want with generating commands. For "raw events" it works great to have a macro with an eval setting "recipients" to a list of email addresses and then using $result.recipients$ in "action.email.to =". However, for things like stats and table, this does not work, as the actual values of recipients are not part of the results. So for "table" it works if I include "recipients" in the table, but that looks horrible. This can be sort of demonstrated like so, where this works: index="_internal" | `recipients` | dedup log_level | table log_level | fields recipients And this does not: index="_internal" | eval recipients = "email1@email.com, email2@email.com" | dedup log_level | table log_level | fields recipients (as recipients is empty). Someone suggested that one could use a savedsearches.conf.spec file to define a token like: [savedsearches] recipients = <string> and then use "recipients" in the savedsearches.conf file as $recipients$. This does not seem to be the case, though; I cannot find this documented anywhere, and the spec file seems to be more "instructive" than anything. Another suggestion was to define a global token directly in the savedsearches file like: [tokens] recipients = Comma-separated list of email addresses and then use $recipients$ for all "action.email.to = $recipients$" in that file. Though I cannot find this token-definition approach documented anywhere either. Are any of these suggestions at all valid? Is there any way, somewhere in the app where the alerts live, to define a "token" like "recipients" which can be referenced in all "action.email.to" instances in that file, so that I only have to update one list in one place? Or is this a "suggested improvement" I need to submit somewhere? All the best
Hello, I have a problem with Linux UFs. It seems they are sending data in batches. The period between batches is about 9 minutes, which means that for the oldest messages in a batch there is a 9-minute delay on the indexer. It starts approximately 21 minutes after a restart; during those 21 minutes the delay is constant and low. All Linux UFs behave in a similar way: it starts 21 minutes after a UF restart, but the period differs. The UF versions are 9.2.0.1 and 9.2.1. I have checked: the queue state in the internal logs, which looks OK, and the UF throughput, which is set to 10240. I have independently verified that after restarting the UF the data comes in with a low and constant delay. After about 21 minutes it stops for about 9 minutes. After those 9 minutes, a batch of messages arrives and is indexed, creating a sawtooth progression in the graph. It doesn't depend on the type of data; it behaves the same for internal UF logs and other logs. I currently collect data using file monitor inputs and a journald input. I can't figure out what the problem is. Thanks in advance for your help, Michal
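Two forwarder settings often involved in this kind of sawtooth pattern, as a sketch only (the values shown are illustrative defaults, not recommendations for this environment):

```ini
# limits.conf on the UF
[thruput]
maxKBps = 0
# 0 = unlimited; a low cap (e.g. the 10240 mentioned above) throttles sending
# and can produce bursty, delayed delivery once steady-state volume exceeds it

# outputs.conf on the UF
[tcpout]
autoLBFrequency = 30
# how often (seconds) the UF switches indexers under auto load balancing
useACK = true
# with indexer acknowledgment enabled, a stalled ACK can also hold back batches
```

It may also be worth checking metrics.log on the UF for blocked=true entries on the tcpout queue during the 9-minute gaps, which would point at the output side rather than the inputs.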
Hello Gustavo, Yes, by default the SaaS controller is SSL-enabled, so we need to provide a secure connection; otherwise the Cluster Agent will fail to connect to the controller. Glad that helped. Best Regards, Rajesh Ganapavarapu