All Posts

Excuse me, can you tell me how to use a calculated field for renaming a host (for example, changing "WIN-KLV1NNUJO8P" to "mydashboard")? I'm new to Splunk and learning.
If you have the episodeID, you can link directly to it: https://YOURSPLUNKSERVER:8000/en-US/app/itsi/itsi_event_management?earliest=-7d%40h&latest=now&form.earliest_time=-7d%40h&form.latest_time=now&episodeid=YOUREPISODEID Please be aware of the time span: if the episode is older than 7d, it won't be found, because this link sets -7d.
You could do something like this:

| makeresults format=json data="[{ \"iphone\": { \"price\" : \"50\", \"review\" : \"Good\" }, \"desktop\": { \"price\" : \"80\", \"review\" : \"OK\" }, \"laptop\": { \"price\" : \"90\", \"review\" : \"OK\" } },{ \"tv\": { \"price\" : \"50\", \"review\" : \"Good\" }, \"desktop\": { \"price\" : \"60\", \"review\" : \"OK\" } }]"
| fields _raw _time
| eval p_name=json_array_to_mv(json_keys(_raw))
| streamstats count as row
| eval flag = pow(2, row - 1)
| mvexpand p_name
| eval {p_name}=flag
| fields - flag row p_name
| stats sum(*) as *

Fields with 1 are only in the first event, fields with 2 are only in the second event (missing from the first event), and fields with 3 are in both events. This also works for more events, as the sums are essentially binary flags for which events the fields come from; e.g. for 3 events, 7 would be all events, 5 would be the first and third, etc.
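The binary-flag trick in the search above can be sketched outside Splunk as well (a Python illustration of the same logic, using the field names from the sample data): each event contributes 2^(row-1) to every field it contains, so the final sum encodes exactly which events a field appeared in.

```python
# Each event contributes 2**(row-1) to every field it contains,
# so the summed value per field is a bitmask of the events it came from.
events = [
    {"iphone", "desktop", "laptop"},  # event 1 -> flag 1
    {"tv", "desktop"},                # event 2 -> flag 2
]

flags = {}
for row, fields in enumerate(events):
    flag = 2 ** row
    for field in fields:
        flags[field] = flags.get(field, 0) + flag

only_first = sorted(f for f, v in flags.items() if v == 1)   # ['iphone', 'laptop']
only_second = sorted(f for f, v in flags.items() if v == 2)  # ['tv']
in_both = sorted(f for f, v in flags.items() if v == 3)      # ['desktop']
```

With three events the flags would be 1, 2, and 4, so a sum of 7 means "in all events" and 5 means "in the first and third", exactly as described above.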
Yes, I restarted the SplunkForwarder service
Hi @chakavak, outputs.conf must not be changed! Did you restart Splunk on the UF after the change? Ciao. Giuseppe
Hi @gcusello, thank you for your reply. I changed the hostname in server.conf, but there is no inputs.conf in the mentioned path on the forwarder; I only have outputs.conf! It also doesn't work when I just change the server.conf file.
Hi @SplunkDash, please use this: https://splunkbase.splunk.com/app/742 Remember to copy the inputs.conf file into a local folder (which you need to create) and to enable (disabled = 0) the inputs you need, because by default all the inputs are disabled. Ciao. Giuseppe
Hi @chakavak, you could manually rename the hostname in $SPLUNK_HOME\etc\system\local\server.conf and $SPLUNK_HOME\etc\system\local\inputs.conf of your forwarder to have these values in your logs. Otherwise, you could rename it with a calculated field at search time. Ciao. Giuseppe
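A minimal sketch of those edits, assuming the new name should be "mydashboard" (the stanza names are the standard ones, but check against your existing files, and restart the forwarder afterwards):

```
# $SPLUNK_HOME\etc\system\local\server.conf
[general]
serverName = mydashboard

# $SPLUNK_HOME\etc\system\local\inputs.conf
[default]
host = mydashboard
```

The host setting in inputs.conf is what shows up in the host field of your indexed events; serverName affects how the forwarder identifies itself to management components.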
Hi @Shihua, it isn't a good idea, because many commands such as timechart rely on _time; in addition, you would have to do this for all sourcetypes, and I'm not sure that's even possible! So why do you want to do this? You already have these fields in epoch time, so you can use them for calculations, and _time is automatically displayed in human-readable form. If you don't like the field name _time, you can rename it at the end of your searches. You could also create a new field from them at index time, but why? Ciao. Giuseppe
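A rename at the end of a search, as suggested above, might look like this (an SPL sketch; the index and the table fields are placeholders):

```
index=your_index
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| rename _time AS event_time
| table event_time index_time host source
```

This leaves _time untouched for time-based commands earlier in the pipeline and only changes the label in the final output.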
I have installed Splunk and added Windows systems to Splunk through the universal forwarder, but I have a problem with the default system names: these names confuse me when I check their status. I want to set an alias or rename the hostname so that I can identify each system by its name in search. For example, I want to change the hostname "WIN-KLV1NNUJO8P" to "mydashboard". Please help me; I can't find an answer to this problem, and the solutions I found on the internet are not working.
Hi @athul_r_m, your request is just a little too vague! Could you better describe your data? E.g. the fields to display, the API execution time field name, etc. Anyway, to sort in descending order, see the options of the sort command (https://docs.splunk.com/Documentation/SCS/current/SearchReference/SortCommandOverview):

index=your_index
| sort -API_execution_time
| table API_execution_time field1 field2 field3

Ciao. Giuseppe
Can someone help me with a query for getting logs in descending order based on the API execution time that is printed in the logs?
Hi, sorry, I missed mentioning whether they are from the same event or different events; the answer is that they are from different events. The standalone query with makeresults is working as expected. I used the main part of your query with mvmap and tweaked it as below:

index=product_db time="1706589466.725491"
| eval data1=if(match(time,"1706589466.725491"),_raw,0)
| eval p_name_1=json_array_to_mv(json_keys(data1))
| table p_name_1
| appendcols
    [ search index=product_db time="1705566003.777518"
    | eval data2=if(match(time,"1705566003.777518"),_raw,0)
    | eval p_name_2=json_array_to_mv(json_keys(data2)) ]
| eval p_unique = mvmap(p_name_1, if(isnull(mvfind(p_name_2, "^".p_name_1."$")), p_name_1, null()))
| eval p_missing = mvmap(p_name_2, if(isnull(mvfind(p_name_1, "^".p_name_2."$")), p_name_2, null()))
| table p_unique p_missing

Please let me know if you have any better suggestions for writing this query. Thanks
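Conceptually, the two mvmap/mvfind expressions compute set differences between the key lists of the two events. A Python sketch of the same comparison (field names are illustrative, taken from the earlier sample data) can serve as a sanity check:

```python
# Keys extracted from each event (p_name_1 / p_name_2 in the SPL query).
p_name_1 = {"iphone", "desktop", "laptop"}  # first event
p_name_2 = {"tv", "desktop"}                # second event

p_unique = sorted(p_name_1 - p_name_2)   # only in the first event
p_missing = sorted(p_name_2 - p_name_1)  # only in the second event
```

The mvmap with an anchored regex ("^" . field . "$") is needed in SPL because mvfind does substring-style regex matching; the anchors force whole-value comparison, which is what a plain set difference gives you here.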
This is another one that I tried, but it doesn't seem to be working. Will the masking apply to fields that have already been extracted during the search process?
Hi everyone, I would like to ask whether I can create a field alias for _indextime and _time and then set this alias as a default field for all sourcetypes?
Yes, I know. But what I'm wondering is whether you actually use <exsl:document>, and if you do, how you use it formally, and whether you would mind blocking packets containing that namespace.
I would suggest not using mvexpand as in your example search; in your example you will triple the raw events. Can you provide a sample of the inputs you want to be able to select? DS makes a multiselect token = a,b,c, so you can use this logic in the search that consumes the token:

index=your_index
    [ | makeresults
    | fields - _time
    | eval <your_field_name>=split("$token$", ",")
    | mvexpand <your_field_name> ]

How were you using prefix/suffix/delim in your old dashboard?
https://advisory.splunk.com/advisories/SVD-2023-1104
https://docs.splunk.com/Documentation/Splunk/9.1.1/Viz/tokens#Token_filters
Hello, I am a security researcher analysing the CVE-2023-46214 vulnerability. I think this vulnerability involves the use of exsl:document, so I want to block packets containing exsl:document. But do you use exsl:document in real life? Is this a feature that is officially supported by Splunk?