All Posts

Here is the Splunk query I am trying to use. The common field in the two queries is ORDERS:

index=source "status for : * "
| rex field=_raw "status for : (?<ORDERS>.*?)"
| join ORDERS
    [ search Message=Request for : *
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\"" ]
| table ORDERS UNIQUEID
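(For reference: one likely culprit is the non-greedy (?<ORDERS>.*?) at the very end of the first rex. With nothing after it in the pattern, it matches zero characters, so ORDERS is always empty and the join never matches. A sketch of an alternative that bounds the capture and uses stats instead of join; the \S+ pattern is an assumption about what an order ID looks like:

index=source "status for : * "
| rex field=_raw "status for : (?<ORDERS>\S+)"
| append
    [ search Message="Request for : *"
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\"" ]
| stats values(UNIQUEID) as UNIQUEID by ORDERS

Unlike join, stats has no subsearch result limits to worry about, which is why it is usually preferred for this kind of correlation.)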
Hi Ryan, it is stored in the notifications_config table in the controller database. Yes I got my answer. Thank you!!
Hi @splunklearner , having the logs in Splunk, you can check whether the hosts are sending logs with a simple search: given a lookup (called e.g. perimeter.csv, with at least one column called host) listing all the hosts that must send logs, you could run something like this:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
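(For reference, a minimal perimeter.csv under this scheme; the hostnames are placeholders:

host
fw-edge-01
web-dmz-01
db-core-01

Any host that appears here but has sent nothing comes out of the search above with total=0.)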
Hi @Athira , could you share your two searches? In a few words: to correlate events, you need to find a common key. If you share your searches, I can guide you on this. Ciao. Giuseppe
Hi, I want to search two log statements and save the result as a table. One log statement uses a regex to extract ORDERS, and the other uses a regex to extract ORDERS and UNIQUEID. My requirement is to combine the two log statements on ORDERS and pull ORDERS and UNIQUEID into a table. I am using join to combine the two log statements on ORDERS, but my Splunk query is not returning any results.
One more quick question: how can we verify whether logs are coming to our syslog server from the network devices? If they aren't, how can we troubleshoot and check whether our syslog server is reachable from their network device and whether the issue is on their end?
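(For reference, a quick way to check on the syslog server itself whether syslog traffic is arriving at all; the interface, port, and log path are assumptions to adapt:

# Watch for inbound syslog packets from a given device (assumes UDP 514 on eth0)
tcpdump -ni eth0 udp port 514 and host <device_ip>
# Then confirm the receiver is actually writing files
ls -lt /var/log/remote/

If tcpdump shows nothing, the problem is upstream: the device's syslog destination, a firewall, or routing. If packets arrive but no files appear, the receiver configuration on the syslog server is the place to look.)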
@yuanliu Hi, I have made the suggested changes, but _time still does not match the raw event's timestamp field (EVENTTS). Please suggest what I should do.
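(For reference, _time is normally set at index time from props.conf on the indexer or heavy forwarder; a minimal sketch, where the sourcetype name, the EVENTTS= prefix, and the format string are all assumptions to adapt to the actual events:

# props.conf, hypothetical sourcetype and timestamp layout
[my_sourcetype]
TIME_PREFIX = EVENTTS=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

Note this only affects newly indexed events; already-indexed events keep their original _time.)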
The solution to this (like a lot of issues with Studio) is to use Classic SimpleXML dashboards (until Studio catches up with all the missing functionality of Classic)! (Or try and forget what Classic can do and live with the limitations of Studio!)  You could also raise a support case with Splunk identifying the problem so it can be added to the (long) list of outstanding deficiencies!
The transpose doesn't make a huge difference (as @yuanliu suggested); the solution is similar to your previous question, just with a change of field name.

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose 0 header_field=field1
| untable column sysname value_info
| eventstats dc(value_info) as distinct_values by column
| where distinct_values > 1
| xyseries column sysname value_info
Apologies @ITWhisperer, I will remember it! I just tried to create the table in a Classic dashboard, and there the new line is shown.
@vinodkumarK Please do not tag / mention me in your posts - I, like many people here, am a volunteer, and, as such I can choose which posts to comment on. I do not appreciate having demands made on my time. I tend to prioritise which posts I answer. Given that this is a Dashboard Studio question, my first response would be, does it also happen in Classic / SimpleXML dashboards?
Hi @splunklearner , if you don't use the add-on, you must manually parse the logs, and that's a job I'd avoid! FQDN is an option in server.conf that makes Splunk use the fully qualified domain name instead of the short hostname. It isn't mandatory; I use it only in multi-tenant environments. For more info see https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Serverconf

hostnameOption = [ fullyqualifiedname | clustername | shortname ]
* The type of information to use to determine how splunkd sets the 'host' value for a Windows Splunk platform instance when you specify an input stanza with 'host = $decideOnStartup'.
* Applies only to Windows hosts, and only for input stanzas that use the "host = $decideOnStartup" setting and value.
* Valid values are "fullyqualifiedname", "clustername", and "shortname".
* The value returned for the 'host' field depends on Windows DNS, NETBIOS, and what the name of the host is.
* 'fullyqualifiedname' uses Windows DNS to return the fully qualified host name as the value.
* 'clustername' also uses Windows DNS, but sets the value to the domain and machine name.
* 'shortname' returns the NETBIOS name of the machine.
* Cannot be an empty string.
* Default: shortname

Ciao. Giuseppe
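(A minimal sketch of the corresponding stanza, assuming a Windows host whose inputs use host = $decideOnStartup:

# server.conf on the Windows instance
[general]
hostnameOption = fullyqualifiedname

After a restart, new events from inputs with host = $decideOnStartup should carry the FQDN as their host value.)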
@PickleRick So basically there is a Windows server with a UF which is forwarding data to Splunk. When I checked the _internal logs, I was not able to find anything on the search head. What should I do next?
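(For reference, a quick search to confirm whether the UF is phoning home at all; the host value is a placeholder for the forwarder's name:

index=_internal sourcetype=splunkd host=<uf_hostname>
| stats count by component

No results here usually means the UF's output to the indexers is broken, so the next place to look is splunkd.log on the forwarder itself.)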
Thanks for your answer. Just a doubt: what is FQDN used for? It is present in my env. Where do we configure this: on the syslog server, the UF, or the indexer? I believe the add-on you mentioned is not being used in our env as of now. Is it seriously recommended to use the add-on?
In the Splunk app, the exception message column has a multi-line message in it. However, when the same query is applied to the table element in Splunk Dashboard Studio, the newline isn't rendered and the message reads as one continuous line. Below is the Splunk app result. Below is the table shown in Studio. Below is the Splunk query.

index="eqt-e2e"
| spath suite_build_name
| search suite_build_name="PAAS-InstantInk-Stage-Regression-Smooth-Transition"
| spath unit_test_name_failed{} output=unit_test_name_failed
| mvexpand unit_test_name_failed
| spath input=unit_test_name_failed
| where message!="Test was skipped"
| spath suite_build_number
| search suite_build_number="*"
| where (if("*"="*", 1=1, like(author, "%*%")))
| where (if("*"="*", 1=1, like(message, "%*%")))
| spath suite_build_start_time
| sort - suite_build_start_time
| eval suite_build_time = strftime(strptime(suite_build_start_time, "%Y-%m-%d %H:%M:%S"), "%I:%M %p")
| table suite_build_name, suite_build_number, suite_build_time, author, test_rail_name, message
| rename suite_build_name AS "Pipeline Name", suite_build_number AS "Pipeline No.", suite_build_time AS "Pipline StartTime (UTC)", author AS "Test Author", test_rail_name AS "Test Name", message AS "Exception Message"

@ITWhisperer
Hello, from my client I created a service account in the Google Cloud Platform:

{
  "type": "service_account",
  "project_id": "<MY SPLUNK PROJECT>",
  "private_key_id": "<MY PK ID>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<MY PRIVATE KEY>\n-----END PRIVATE KEY-----\n",
  "client_email": "<splunk>@<splunk>-<blabla>",
  "client_id": "<MY CLIENT ID>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<MY-SPLUNK-URL>"
}

The service account was recognized by the add-on, so I assumed the connection was established correctly. Later, when I created the first input to collect the logs (a Pub/Sub) and searched for the "Projects" connected to this service account (in the photo), it returned the error "External handler failed with code '1' and output ''. see splunkd.log for stderr". The stderr in splunkd.log actually gives no useful information (just a generic error), so I am blocked at the moment. I also downloaded the code of the Google Cloud Platform Add-on, but it is not an easy debugging process; I cannot find the actual query that the add-on performs when clicking on "Projects". Does someone have an idea about this error? Thanks
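(For reference, one way to rule out a credentials/permissions problem is to exercise the same key outside Splunk; a minimal sketch, assuming the gcloud CLI is installed and the key is saved as sa-key.json:

# Authenticate as the service account, then try to list projects,
# which is roughly what the add-on needs to do to populate "Projects"
gcloud auth activate-service-account --key-file=sa-key.json
gcloud projects list

If the listing fails, the service account is likely missing a role such as Viewer, or the resourcemanager.projects.get/list permissions, on the project.)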
Hi @Vnarunart , let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @splunklearner , to better understand how to use forwarders for getting data in, please read https://docs.splunk.com/Documentation/Splunk/9.3.1/Data/Usingforwardingagents or search for related videos. Anyway, answering your questions:
1) FQDN is an option that you can add to your server.conf, but it isn't mandatory; in fact I didn't use it in my answer.
2) The add-on is useful to get correct parsing on the UFs; you have to install it both on the UF and on the Search Head. Yes, you can add the stanza in inputs.conf and the indexer address in outputs.conf, but without the add-on you could end up with incorrect parsing.
3) To install an add-on on a UF you have two choices: copy and untar it in $SPLUNK_HOME/etc/apps on the UF, or deploy it using the Deployment Server (if you have one); for more info see https://docs.splunk.com/Documentation/Splunk/9.3.1/Updating/Aboutdeploymentserver
4) The IP address I referred to is the one that is probably in the path where the F5 logs are written: in other words, rsyslog (or syslog-ng, I don't know which you're using) usually writes logs to a path that contains the sender's IP address, and this information is useful to associate the correct host with your logs (see the sketch below).
Ciao. Giuseppe
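(A minimal sketch of what point 4 can look like on the UF, assuming rsyslog writes each device's logs under /var/log/remote/<ip>/...; the path and the sourcetype are assumptions to adapt:

# inputs.conf on the UF, hypothetical path layout
[monitor:///var/log/remote/*/f5.log]
sourcetype = f5:bigip:syslog
# use the 4th path segment (the sender IP) as the host value
host_segment = 4

With host_segment, each event's host field is taken from the matching part of the file path instead of from the UF's own hostname.)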
I appreciate your advice.
1. What is FQDN? What is it used for? Where do we need to set it?
2. Why install the add-on on the UF? Doesn't the UF forward these logs to our indexer once we add a monitor stanza in inputs.conf and the indexer IP address in outputs.conf on the UF?
3. How can we install an add-on on a UF (the lighter package has no UI to open)?
4. Which IP address are you referring to: the syslog server or the UF?