All Posts

Hi @verbal_666, You can find the relevant documentation about timestamp information below. Events that are missing the date_* fields may not have an extractable timestamp inside them. https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Usedefaultfields#Use_default_fields Only events that contain timestamp information generated by their respective systems will have date_* fields. If an event has a date_* field, it represents the time/date taken directly from the event itself. If you have specified any timezone conversions or changed the time/date at indexing or input time (for example, by setting the timestamp to the index or input time), these fields will not reflect that.
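For what it's worth, a quick way to see which of your events actually carry the extracted date_* fields is to group on one of them. This is just a minimal sketch; replace the placeholder base search with your own index/sourcetype:

your_base_search
| eval has_date_fields=if(isnotnull(date_hour), "date_* extracted", "no date_* fields")
| stats count by has_date_fields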
Hi there. Have you noticed that in many events, when you expand the event details, the _time field has a "+" icon next to it? Expanding it shows the details of how the _time field was created. What is that? In other events I can't see the "+" icon, even on the same server/path/log. Is it something like: "+" means the Splunk indexer built the timestamp itself with its own algorithms, while no "+" means the timestamp arrived already cooked, with no extra processing? Thanks.
I think I understand - try this search to create a table with the fields _time, percentage, and one or more columns named from the value calculated each day:

| gentimes start=-7
| eval sample=random()%100
| eval perc=random()%50
| rename starttime as _time
| append [| makeresults | eval sample=100, perc=45 | table _time, sample, perc]
| timechart span=1d max(sample) as name, avg(perc) as "percentage"
``` Calculate how we name the fields based on the value of: name ```
| eval rename_field_to=if(name=100,"C","N/A")
| eval "The Sample Yields {rename_field_to}" = name
| fields - rename_field_to, name

This will create three or four columns:
_time = time
percentage = daily average of the perc field
The Sample Yields C = if the max for that day was 100
The Sample Yields N/A = if the max for that day was not 100

If you only want "The Sample Yields C" or nothing, you can filter with | search rename_field_to="C" after the eval that sets rename_field_to (and before the final fields command).

The main SPL is:

| eval "The Sample Yields {rename_field_to}" = name

That will allow you to name a field using the value of another field.

If you want NA to simply be N/A then you can do a rename:

| rename "The Sample Yields N/A" as "N/A"

Is that closer to what you were after?
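To isolate just the field-name templating trick from the rest of the query, here is a minimal run-anywhere sketch using made-up values:

| makeresults
| eval rename_field_to="C", name=100
| eval "The Sample Yields {rename_field_to}" = name
| fields - rename_field_to, name

The second eval creates a field literally named "The Sample Yields C" holding the value 100.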
Ok, here's a quick fix to stop any dashboards loading after a page refresh:

<condition value="dash_a">
  <link target="_blank">/app/search/dash_a</link>
  <set token="form.link_dash"></set>
  <set token="link_dash"></set>
</condition>

This will only create a new window with a dashboard if the token matches dash_a, and do nothing if it's blank. Once we load the dashboard, we reset the token (both form.token and token) to an empty string. That way, if the page reloads, we do nothing.

We can also make the condition statement a bit smarter. If you set the choice values to be the name of the dashboard you want to load, we can do this:

Final Version

<form version="1.1" theme="light">
  <label>Dash_C</label>
  <fieldset submitButton="false">
    <input type="link" token="link_dash">
      <label>View other Dashboard:</label>
      <choice value="dash_a">Dashboard 1 ↗</choice>
      <choice value="dash_b">Dashboard 2 ↗</choice>
      <choice value="dash_c">Dashboard 3 ↗</choice>
      <change>
        <condition value="">
        </condition>
        <condition>
          <link target="_blank">/app/search/$link_dash$</link>
          <set token="form.link_dash"></set>
          <set token="link_dash"></set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$CSS$">
      <html>
        <style>
          .splunk-linklist{width:fit-content!important;}
          .splunk-linklist button{ min-width: 120px;}
          .splunk-linklist button span{ -webkit-box-pack: left; justify-content: left;-webkit-box-align: left; align-items: left;}
          .splunk-linklist button{background-color: #dddddd82;margin: 4px 2px 0px 0px; transition: 0.3s;}
          .splunk-linklist button:hover {background-color:#007abd!important;color:white!important;}
        </style>
      </html>
    </panel>
  </row>
</form>

The condition block will do nothing if the link_dash token is blank, but will load the dashboard in $link_dash$ if it's not blank. It then sets the token to "" so it won't load the dashboard again on a refresh.

By using the <condition> as above, you can add as many dashboards as you want via the dropdown UI without needing to update the code.
I am also having an issue installing UF v9.2.1 on one of my servers. I uninstalled the old version and ran the new installer with admin rights. I also disabled the antivirus, but it still failed. Any advice on what I can do next?
Hello @ITWhisperer, Thanks for your quick response, truly appreciate it. However, it's not working; it is still returning the entire set of events for sourcetype accountA.
"Create a composite field with the two labels concatenated and count by that" - I am not sure how to create a composite field, could you please advise on this?
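For reference, a composite field is usually built with eval by concatenating the two fields and then counting by the result. A minimal sketch, assuming the two labels live in hypothetical fields label1 and label2:

| eval composite_label=label1.":".label2
| stats count by composite_label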
Hi @yuanliu, thank you so much, it worked.
Thank you @richgalloway, it was pretty simple.
I am planning on teaching others how to use Splunk to search through data, similar to the Splunk Boss of the SOC challenges: https://github.com/splunk/botsv3 Similarly, I would like to export the data I generated in my Splunk instance so that students can import it into theirs and follow along. The only way I can figure out how to do this is by running a search and using the export feature. Is there a recommended approach for this?
Yes, it is very odd that it just removes fields if the viz space is not big enough - certainly not intuitive!
Thank you so much for the explanations - appreciate it! It displays all the fields now if I make the pie chart very large. It still doesn't display all the fields when two pie charts sit side by side in the same dashboard row and the smallest percentage is too small, but it works well when the chart occupies an entire row. Also, to me this is still a bit unintuitive for the users...
Nor did I when I answered 2 years ago 
Whenever you use a field name in an eval expression (and where requires an eval expression), you need to put single quotes around the field name if it appears on the right-hand side of the statement and contains non-simple characters (in this case the full stops), so:

| where 'event.Properties.errMessage' != "OK"

Note the sometimes confusing mix of single and double quotes. For example, this statement:

| eval event.Properties.errMessage="Hello"

does NOT need quotes on the left-hand side of the statement. Where quotes are necessary on the left-hand side, they must be double quotes, so if your field name has a space, you would need:

| eval "My Field With Spaces"="Hello"
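Restating the post's own two examples in one run-anywhere snippet (the field values here are made up purely for illustration):

| makeresults
| eval event.Properties.errMessage="Timeout", "My Field With Spaces"="Hello"
| where 'event.Properties.errMessage' != "OK"

The where clause keeps the row because the single-quoted name is read as a field reference, not a string literal.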
That is so funny, I never even thought of that! @Shan could you confirm whether @bowesmana's reply answers your question, and accept it as the answer if so?
I tried, but the search is returning no results.
Hey man, thanks for the quick reply. I've installed the UF on the DC. So I need to change some configs on the DC, under the Splunk folder, to point back to the Splunk server? What changes need to be made? I'm guessing it's that notepad file under splunk /etc/system/local. I can see the outputs file is set to use my Splunk server, and the deployment client is pointing to the Splunk server IP as well. The service is running on the DC and the firewall rules have been checked too.

Do I need to configure something else on the Splunk server besides the receiving page? In the "Choose logs from this host" field (under remote sources), when I put the IP of the DC in there, it just keeps saying it is unable to get WMI classes from the host. Do I even need to fill out this page?

Under forwarder management it says "no clients or apps are currently available on this deployment server". Does that mean I need to install the forwarder on the server too? ...and yes, just some frustration there. Cheers
I need to ask: if I want to move the Splunk servers to another data store (vSphere), would this affect anything regarding Splunk itself?
Can you give a bit more about your query, because having to use appendpipe to get dates filled in seems a little unusual. This example:

| makeresults
| eval count=split("2,0,2,0,0,0,0",",")
| mvexpand count
| streamstats c
| eval _time=now() - ((7 - c) * 86400)
| fields - c

will produce this single viz whether or not you add

| timechart span=1d max(count) as count
This will partly depend on what proportion of the total data you are looking to exclude. If the excluded proctitles are a significant proportion of the data, then using a post-process where or regex clause may not perform so well, but you will have to play with that. Setting tags will still involve a search-time extraction to evaluate the tag, so under the hood the search is still being done.

You might want to look at the TERM directive - see this link: https://conf.splunk.com/files/2020/slides/PLA1089C.pdf You will need to understand what constitutes a TERM in your data and whether that will work for your use case, but it can significantly improve performance.

When you are looking at this type of performance issue, go and look at the job properties in the job inspector - check the scan count values; the more you scan, the more data you are having to check.

You could go down the indexed extraction route, where you set a field at index time, but that is somewhat static: if you need to exclude a new proctitle, it won't help. It will improve search performance at the cost of indexing performance and disk space.
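As a rough illustration (the index, sourcetype, and proctitle value below are made-up placeholders, and TERM matches a raw indexed term anywhere in the event rather than the proctitle field specifically, so the two searches are only approximately equivalent):

``` exclusion pushed down to the index lexicon ```
index=os_linux sourcetype=auditd NOT TERM(crond)

``` post-process filter that has to retrieve and scan every event first ```
index=os_linux sourcetype=auditd
| where proctitle!="crond"

Comparing the scanCount of the two jobs in the job inspector will show how much less data the TERM version has to touch.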