All Posts



The solution to this (like a lot of issues with Studio) is to use Classic SimpleXML dashboards (until Studio catches up with all the missing functionality of Classic)! (Or try and forget what Classic can do and live with the limitations of Studio!)  You could also raise a support case with Splunk identifying the problem so it can be added to the (long) list of outstanding deficiencies!
The transpose doesn't make a huge difference (as @yuanliu suggested); the solution is similar to your previous question, just with a change of field name.

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose 0 header_field=field1
| untable column sysname value_info
| eventstats dc(value_info) as distinct_values by column
| where distinct_values > 1
| xyseries column sysname value_info
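For anyone trying to follow what the untable/eventstats/xyseries combination is doing, here is a minimal Python sketch of the same idea (a conceptual translation, not SPL; the sample field values are invented): transpose the rows into columns, count the distinct values each field takes across systems, and keep only the fields whose value actually differs.

```python
# Sample data: one dict per system, mimicking the transposed table.
rows = {
    "sys1": {"field2": "a", "field3": "x"},
    "sys2": {"field2": "a", "field3": "y"},
}

def varying_fields(rows):
    """Keep only fields whose value differs across systems."""
    fields = {f for vals in rows.values() for f in vals}
    out = {}
    for f in sorted(fields):
        distinct = {vals.get(f) for vals in rows.values()}
        if len(distinct) > 1:  # mirrors: eventstats dc(value_info) ... | where distinct_values > 1
            out[f] = {sys: vals.get(f) for sys, vals in rows.items()}
    return out

print(varying_fields(rows))
# field2 is identical everywhere, so only field3 survives
```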
Apologies @ITWhisperer, I will remember that! I just tried creating the table in a Classic dashboard, and there the new line is shown.
@vinodkumarK Please do not tag / mention me in your posts - I, like many people here, am a volunteer, and, as such, I can choose which posts to comment on. I do not appreciate having demands made on my time. I tend to prioritise which posts I answer. Given that this is a Dashboard Studio question, my first response would be: does it also happen in Classic / SimpleXML dashboards?
Hi @splunklearner,
if you don't use the add-on, you must manually parse the logs yourself, and that's a job I'd avoid!
hostnameOption is a setting in server.conf that makes splunkd use the FQDN instead of the short hostname. It isn't mandatory, and I use it only in multi-tenant environments. For more info see https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Serverconf

hostnameOption = [ fullyqualifiedname | clustername | shortname ]
* The type of information to use to determine how splunkd sets the 'host' value for a Windows Splunk platform instance when you specify an input stanza with 'host = $decideOnStartup'.
* Applies only to Windows hosts, and only for input stanzas that use the "host = $decideOnStartup" setting and value.
* Valid values are "fullyqualifiedname", "clustername", and "shortname".
* The value returned for the 'host' field depends on Windows DNS, NETBIOS, and what the name of the host is.
* 'fullyqualifiedname' uses Windows DNS to return the fully qualified host name as the value.
* 'clustername' also uses Windows DNS, but sets the value to the domain and machine name.
* 'shortname' returns the NETBIOS name of the machine.
* Cannot be an empty string.
* Default: shortname

Ciao. Giuseppe
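If you do decide to set it, the setting lives in server.conf on the Windows instance. A minimal sketch (untested; the [general] stanza placement is my reading of the docs quoted above):

```
# server.conf on the Windows Splunk instance - hedged sketch, verify against
# the Serverconf docs for your version before deploying
[general]
hostnameOption = fullyqualifiedname
```

Remember it only affects inputs that use host = $decideOnStartup, and a restart of splunkd is needed for it to take effect.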
@PickleRick so basically there is a Windows server with a UF installed, which is forwarding data to Splunk. When I checked the _internal logs, I was not able to find anything on the search head. What should I do next?
Thanks for your answer. Just one doubt: what is FQDN used for? In my environment it is there. Where do we configure this - on the syslog server, the UF, or the indexer? I believe the add-on you mentioned is not being used in our environment as of now. Is it really recommended to use the add-on?
In the Splunk app, the exception message column has a multi-line message in it. However, when the same query is applied to the table event in Splunk Dashboard Studio, the newline isn't respected, and the message is rendered as one continuous line.

Below is the Splunk app result.

Below is the table shown in Studio.

Below is the Splunk query.

index="eqt-e2e"
| spath suite_build_name
| search suite_build_name="PAAS-InstantInk-Stage-Regression-Smooth-Transition"
| spath unit_test_name_failed{} output=unit_test_name_failed
| mvexpand unit_test_name_failed
| spath input=unit_test_name_failed
| where message!="Test was skipped"
| spath suite_build_number
| search suite_build_number="*"
| where (if("*"="*", 1=1, like(author, "%*%")))
| where (if("*"="*", 1=1, like(message, "%*%")))
| spath suite_build_start_time
| sort - suite_build_start_time
| eval suite_build_time = strftime(strptime(suite_build_start_time, "%Y-%m-%d %H:%M:%S"), "%I:%M %p")
| table suite_build_name, suite_build_number, suite_build_time, author, test_rail_name, message
| rename suite_build_name AS "Pipeline Name", suite_build_number AS "Pipeline No.", suite_build_time AS "Pipeline StartTime (UTC)", author AS "Test Author", test_rail_name AS "Test Name", message AS "Exception Message"

@ITWhisperer
Hello,
From my client I created a service account in the Google Cloud Platform:

{
  "type": "service_account",
  "project_id": "<MY SPLUNK PROJECT>",
  "private_key_id": "<MY PK ID>",
  "private_key": "-----BEGIN PRIVATE KEY-----\<MY PRIVATE KEY>\n-----END PRIVATE KEY-----\n",
  "client_email": "<splunk>@<splunk>-<blabla>",
  "client_id": "<MY CLIENT ID>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<MY-SPLUNK-URL>"
}

The service account was recognized by the add-on, so I assumed the connection was established correctly.

Later, when I created the first input to collect the logs (a Pub/Sub) and searched for the "Projects" connected to this service account (in the photo), it returned the error "External handler failed with code '1' and output ''. see splunkd.log for stderr".

Actually the stderr in splunkd.log gives no useful information (just a generic error), so I am blocked at the moment. I also downloaded the code of the Google Cloud Platform Add-on, but it is not an easy debugging process; I cannot find the actual query that the add-on performs when clicking on "Projects".

Does someone have an idea about this error?
Thanks
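One cheap thing to rule out before digging into the add-on's code is a malformed service-account file. Here is a hedged diagnostic sketch (my own helper, not part of the add-on or of Google's libraries) that checks the JSON for the keys a service-account credential normally contains:

```python
import json

# Keys normally present in a GCP service-account JSON key file.
REQUIRED = {
    "type", "project_id", "private_key_id", "private_key",
    "client_email", "client_id", "auth_uri", "token_uri",
}

def missing_keys(raw):
    """Return a list of problems found in a service-account JSON string."""
    data = json.loads(raw)  # raises ValueError on invalid JSON
    problems = sorted(REQUIRED - data.keys())
    if "@" not in data.get("client_email", ""):
        problems.append("client_email looks malformed")
    return problems

print(missing_keys('{"type": "service_account"}'))
```

If this reports problems (for instance, a client_email missing its domain, as in the snippet above where the closing quote was lost), re-download the key from the GCP console rather than hand-editing it.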
Hi @Vnarunart , let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @splunklearner,
to better understand how to use forwarders for getting data in, please read https://docs.splunk.com/Documentation/Splunk/9.3.1/Data/Usingforwardingagents or search for related videos.

Anyway, answering your questions:

1) FQDN is an option that you can add to your server.conf, but it isn't mandatory; in fact I didn't use it in my answer.

2) The add-on is useful to get correct parsing of the data from UFs; you have to install it both on the UF and on the Search Head. Yes, you can just add the stanza in inputs.conf and the indexer address in outputs.conf, but then you could end up with incorrect parsing.

3) To install an add-on on a UF you have two choices: copy and untar it in $SPLUNK_HOME/etc/apps on the UF, or deploy it using the Deployment Server (if you have one); for more info see https://docs.splunk.com/Documentation/Splunk/9.3.1/Updating/Aboutdeploymentserver

4) The IP address I referred to is the one that you probably have in the path where the F5 logs are written: in other words, usually rsyslog (or syslog-ng, I don't know which you're using) writes logs in a path, and the IP address of the sender is part of this path; this information is useful to associate the correct host with your logs.

Ciao. Giuseppe
I appreciate your advice.
1. What is FQDN? What is it used for? Where do we need to set it?
2. Why install the add-on on the UF? Doesn't the UF forward these logs to our indexer if we give a monitor stanza in inputs.conf and the indexer IP address in outputs.conf on the UF?
3. How can we install an add-on on a UF (the lighter package can't be opened in the UI)?
4. Which IP address are you referring to? The syslog server or the UF?
Hi @splunklearner,
let me understand: your F5 WAF is already sending its logs to your syslog server, and the syslog server writes these logs to a file in a folder; I suppose that the folder path contains the hostname or IP address of the sender.

In this case, you have to install your UF on the syslog server and then install on this UF the Fortinet FortiGate Add-On for Splunk. In this add-on, you have to create a local folder and a new conf file called inputs.conf.

If the path of the log files is /data/f5_waf/<ip_address>/<year>/<month>/<day>/ and the filename is waflogs_yyyymmdd.log, in this file you have to add the following stanza:

[monitor:///data/f5_waf/.../waflogs_*.log]
index = your_index
sourcetype = fgt_logs
host_segment = 3
disabled = 0

and then restart the UF.

For more info see https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Data/Monitorfilesanddirectories

Ciao. Giuseppe
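To see why host_segment = 3 is the right value here, a small Python sketch of the segment-counting rule (a conceptual illustration, not Splunk's actual code; the sample path is invented): Splunk counts path segments from 1, starting at the first segment after the leading slash, and uses the Nth one as the event's host.

```python
def host_from_path(path, host_segment):
    """Pick the Nth (1-based) path segment, as host_segment does."""
    segments = [s for s in path.split("/") if s]  # drop empty segments
    return segments[host_segment - 1]

# /data is segment 1, f5_waf is 2, the sender's IP is 3
print(host_from_path("/data/f5_waf/10.1.2.3/2024/09/15/waflogs_20240915.log", 3))
```

So with the folder layout above, each event's host field becomes the syslog sender's IP address rather than the syslog server's own hostname.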
Hi all,
We want to get F5 WAF logs into Splunk. The WAF team is sending logs to our syslog server. On our syslog server a UF is installed, and it will forward the data to our indexer.

Please help me with any detailed documentation or steps to ingest the data successfully, and any troubleshooting if needed. I don't really know what syslog is... I am very new to Splunk and learning. Apologies if it is a basic question, but I seriously want to learn.
Okay, so I don't know exactly where the search is. I have the datamodel.
Hi @Vnarunart,
this is a request that I posted in Splunk Ideas (https://ideas.splunk.com/ideas/EID-I-1731) and it's in the "Under consideration" state; if you think it's useful, please vote for it!

Anyway, you could add to your Heavy Forwarders a custom indexed field with the name of the HF: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Data/Configureindex-timefieldextraction

in props.conf:

[default]
TRANSFORMS-hf_name = my_hf_1

in transforms.conf:

[my_hf_1]
REGEX = .
FORMAT = my_hf_1::my_hf_1
WRITE_META = true

and then in fields.conf:

[my_hf_1]
INDEXED = true

one set of stanzas for each HF.

Ciao. Giuseppe
Yes. Sorry, I erased the = when editing the text.
Compared with some of your previous questions on the same subject, this is much clearer. In Re: Search an index for two fields with a join, I gave an example based on the speculation that description was unimportant. Now that you illustrate expected results, I no longer have to read your mind. The illustrated results also imply that there can be different formats in description, that fields first and last are all lower-case, and that the name in description uses the first-cap rule. So, instead of using the second search as a subsearch to limit the first search, simply append the output from the second search and do stats on events from both.

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| append
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
      | dedup description
      | rex field=description "Leaver Request for (?<first>\S+) (?<last>\S+) -"
      | rex field=description "(?<first>\S+) (?<last>\S+) Offboarding on -"
      | eval first = lower(first), last = lower(last) ]
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| fields identity email extensionattribute10 extensionattribute11 first last _time affect_dest active description dv_state number
| stats values(*) as * min(_time) as _time BY first last

Hope this helps.
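For readers less familiar with SPL, the append-then-stats pattern can be sketched in Python (a conceptual translation only; the sample records are invented): concatenate the events from both searches, then group by (first, last) and collect the distinct values of every other field.

```python
from collections import defaultdict

# Invented sample events standing in for the two searches' output.
ldap_events = [{"first": "jane", "last": "doe", "email": "jane.doe@example.com"}]
snow_events = [{"first": "jane", "last": "doe", "number": "INC0001"}]

grouped = defaultdict(lambda: defaultdict(set))
for event in ldap_events + snow_events:      # | append [ ... ]
    key = (event["first"], event["last"])    # BY first last
    for field, value in event.items():
        if field not in ("first", "last"):
            grouped[key][field].add(value)   # stats values(*) as *

result = {k: {f: sorted(v) for f, v in flds.items()} for k, flds in grouped.items()}
print(result)
# fields from both sources end up on the same (first, last) row
```

This is why the append works where the join struggled: neither search constrains the other, and the grouping at the end stitches the rows together.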
@yuanliu You mean like below?

TIME_PREFIX = EVENTTS=\"