All Posts

Yuanliu, thanks for the info. I will look into that and respond with my findings.
sadly no
Yes, you can copy the URL, decode the URL parameters, and paste it into a new search, but clicking on a bookmarklet is more convenient for me. If decoding your query due to the 414 error is a common occurrence, you could also make a CyberChef recipe to help. I don't know how much work it would take to make a bookmarklet that would POST the AST to the server instead.

I understand that your search has a large number of calculations, but you can use a macro to make the URL shorter:

index=test example.com | `complex_calculations` | `get_geoip_data(src_ip)` | `multiple_stats_commands`

Each macro can contain a very large number of commands. When possible, I create macros that are reusable, but that is not always appropriate. In particular, Splunk Enterprise Security content includes a separate filter macro for each Correlation Search so that false positives can be tuned out without editing the core detection logic. Without access to your search query, it is difficult to know how to make the search smaller. In a Windows browser, you can press Ctrl-Shift-E while writing your search to show the "Expanded Search String" with the content of all the macros expanded.

These are a couple of examples of how I've moved long parsing and calculation strings into macros:

get_datamodel_desc(1)
entropy_digits_lowercase(1) (the Decrypt2 app is better than this macro)
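For anyone unfamiliar with where such macros live, here is a minimal sketch of what a definition could look like in macros.conf. Only the macro name comes from the example above; the argument name and SPL body are invented for illustration:

# macros.conf -- hypothetical body for the one-argument macro used above
[get_geoip_data(1)]
args = ip_field
definition = iplocation $ip_field$ | eval src_geo = City . ", " . Country

Calling `get_geoip_data(src_ip)` in a search would then expand to that definition with $ip_field$ replaced by src_ip, keeping the URL-visible search string short.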
Hi, so like in the screenshot - but here it is again:

| mstats max("% Free Space") as "MB", max("Free Megabytes") as "FreeMB" WHERE index=m_windows_perfmon AND host=NTSAP10 span=1d by instance
| search instance!=hard*
| search instance!=_Total
| eval FreeDiskspace=round(FreeMB/1024,2)
| eval TotalDiskspace=round((FreeDiskspace/MB)*100,2)
| eval FullDiskspace=round(TotalDiskspace-FreeDiskspace,2)
| stats max("FreeDiskspace") as "Free Diskspace (GB)", max("FullDiskspace") as "Full Diskspace (GB)" by instance

So it's metrics I'm trying to use for it: the "% Free Space" and "Free Megabytes" metrics from my Windows perfmon index. I exclude instances that have "hard" or "_Total" in them, and then eval three versions of the disk space. This way I have the free disk space in GB, the total disk space, and the full (used) disk space.

The free and the full (used) disk space are the ones I have in the table, again as seen above, but when I try the pie chart it does not show what I am looking for. I'd like to have a pie chart for each instance that shows the free and used space together. Right now it only shows me one of two things:
- only the free or the full space per instance
- all free spaces from all instances in one pie chart
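A hedged sketch of one way to get a single instance's free and used space into one pie: filter the existing stats output to one instance, then transpose the two columns into rows so each becomes a slice. The instance name "C:" is a placeholder, not taken from the question:

... | stats max("FreeDiskspace") as "Free Diskspace (GB)", max("FullDiskspace") as "Full Diskspace (GB)" by instance
| search instance="C:"
| fields - instance
| transpose column_name="space_type"
| rename "row 1" as GB

After the transpose there are two rows, "Free Diskspace (GB)" and "Full Diskspace (GB)", each with a GB value, which a pie chart can render as two slices for that one instance.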
Hi @haleyh44 , one additional piece of information: do you want HA on your data or not? To have HA you need to create an Indexer Cluster, which requires an additional machine (the Cluster Manager) that cannot be one of the others. Anyway, the two new machines have different requirements in terms of disk space: the new Indexers should have the same storage as the old server.

If you don't want HA, you have to:
- install Splunk on the two new servers,
- copy the indexes.conf and the Technology Add-ons from the old server to one of the other two, which will be one of the two Indexers,
- copy all the apps from the old server to the new server that will be the Search Head,
- configure the Search Head for Distributed Search as described in the links shared by @JohnEGones 

If you want HA, you have to:
- install Splunk on three new servers,
- configure an Indexer Cluster on the old server and two of the new ones,
- copy the indexes.conf and the Technology Add-ons from the old server to the Cluster Manager,
- copy all the apps from the old server to the new server that will be the Search Head,
- configure the Search Head for Distributed Search as described in the links shared by @JohnEGones 

For more info about Splunk architectures, see https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf

Ciao. Giuseppe
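As a hedged illustration of the distributed-search step, this is roughly how a search head is pointed at an indexer from the CLI; the host name and credentials are placeholders, not values from this thread:

# run on the new search head; idx1.example.com and both credential pairs are hypothetical
splunk add search-server https://idx1.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword remotepass

The same can be done in the UI under Settings > Distributed search > Search peers, as the linked documentation describes.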
Well, if we are unearthing this, then:

index=logins | dedup 5 login
Too many brackets - try like this:

<colorPalette type="expression">case(match(value,"Large Effect") OR match(value,"No"),"#ff0000",match(value,"Medium Effect"),"#ffff00",match(value,"Small Effect"),"#00ff00",true(),"#ffffff")</colorPalette>
It works now. Thank you very much
Hi @chaturvedi , if you're speaking of Windows logs, you can use whitelists and blacklists to choose the data to index. You can find more info at https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Inputsconf 

Otherwise, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.0/Forwarding/Routeandfilterdatad

Ciao. Giuseppe
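As a hedged sketch of the Windows-log case, an inputs.conf stanza on the forwarder can restrict indexing to specific event codes; the channel and event codes below are illustrative, not from the thread:

# inputs.conf -- hypothetical example: index only selected Security event codes
[WinEventLog://Security]
disabled = 0
whitelist = 4624,4625,4672

A blacklist setting works the same way in reverse, dropping the listed event codes and indexing everything else.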
Hi @s_unny , as @isoutamo also said, you don't have enough disk space on your desktop. I suggest reducing the retention time of your indexes, starting from _internal and the largest indexes. Ciao. Giuseppe
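A hedged sketch of what shortening retention can look like in indexes.conf; the index name follows the suggestion above, but the seven-day value is a placeholder to adapt:

# indexes.conf -- hypothetical example: age data out of _internal after 7 days
[_internal]
frozenTimePeriodInSecs = 604800

Buckets older than this period are frozen (deleted by default), which frees disk space at the cost of losing the older events.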
I need to create an alert, but the data to be fetched from the server is using a lot of license in Splunk. The data to be fetched is a few keywords from an Excel file that will be available on the server. I need to install the Universal Forwarder on the servers. Is it possible to make any changes at the Universal Forwarder level so that it forwards only the keywords to Splunk? If not, what alternative options are there to ingest the data without using a lot of Splunk license?
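For context, the standard way to keep only events matching certain keywords is a props/transforms pair that routes everything else to the nullQueue. Note that this parsing happens on a heavy forwarder or the indexers, not on a Universal Forwarder; the sourcetype and keywords below are hypothetical:

# props.conf -- hypothetical sourcetype; transforms run left to right
[excel_export]
TRANSFORMS-keep_keywords = drop_all, keep_keywords

# transforms.conf -- send everything to nullQueue, then re-route keyword matches
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_keywords]
REGEX = keyword1|keyword2|keyword3
DEST_KEY = queue
FORMAT = indexQueue

Only events matching one of the keywords reach the index, so only they count against the license.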
Unfortunately not - I don't understand it myself
Hi @jalbarracinklar , one of my customers had the same issue; they opened a ticket with Splunk Support and it was quickly solved. Ciao. Giuseppe
Hi @chimuru84 , with this search you take all the check events in your logs and compare them with the users list, so you can determine whether there's some user that didn't do any check on the third party. The main job is to extract the check events from your logs, and I cannot help you with this because I don't know your logs; then you can use my search to compare the results with the users list. Ciao. Giuseppe
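A hedged sketch of the comparison pattern described here; the index, sourcetype, user field, and the users_list.csv lookup are all assumptions, not taken from the thread:

index=your_checks_index sourcetype=your_checks
| stats count AS checks BY user
| append [| inputlookup users_list.csv | fields user | eval checks=0]
| stats sum(checks) AS checks BY user
| where checks=0

Users that appear only in the lookup end up with a zero sum, so the final where clause returns exactly the users with no check events.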
Hi @jjohn149 , maybe it's just an impression, but the searches seem the same; probably the values in the conditions are different. I would put the three searches into one, thus also avoiding the 50,000-result limit of the subsearch. In my example I will use condition1, condition2 and condition3 to adapt to your real need:

index=osp source=xxx EVENT_TYPE IN (event_type1, event_type2, event_type3) EVENT_SUBTYPE IN (event_subtype1, event_subtype2, event_subtype3) field1=* field3 IN (field31, field32, field33) field4=""
| eval DATE=strftime(_time, "%Y-%m-%d")
| stats latest(eval(if(field3="field31",source,""))) AS example1 latest(eval(if(field3="field32",source,""))) AS example2 latest(eval(if(field3="field33",source,""))) AS example3 by field5 field6 DATE

Ciao. Giuseppe
Forget all the older posts that do not have answers. So, you are telling fellow Splunk users that you have a Windows source that outputs 0 in the page_printed field when someone printed more than 0 pages on a Windows machine, and when you print 5 pages on that Windows machine, this Windows source gives 1 in the total_pages field. Is this correct? Because that is what your sample code would suggest. There is nothing Splunk does in your code to change, aggregate, or otherwise affect these field values. If all the older posts you linked are like this, no wonder they received no answer, because this is not a Splunk question.

I suggest the following:
- Examine the event log directly on that Windows machine to see if it has the correct values. Troubleshoot Windows if those values are believed to be bad. The Splunk forum will not be useful to you.
- Compare the source events Splunk ingested from that Windows machine with your direct copy of the Windows log. Troubleshoot the ingestion problem if they are different. (Getting Data In is a better forum for this, but make sure you present evidence that the two are different. Alternatively, engage support.)
You could use a search like this to check if the entities mapped in a service are receiving events within a specified time frame; if not, you could consider them unstable and alert:

| inputlookup itsi_entities append=true
| rename services._key as service_key
| rename title as entity
| fields entity, service_key
| where isnotnull(service_key)
| mvexpand service_key
| inputlookup service_kpi_lookup append=true
| eval key=coalesce(service_key,_key)
| stats values(entity) as host, values(title) as service by key
| mvexpand host
| dedup host
| fields host
| eval host=lower(host)
| join type=outer host
    [| metadata type=hosts index=_internal
    | eval host=lower(host)
    | eval status = if(lastTime>now()-180,1,0)]
| eval status=if(status=1,1,0)
Hi @Footoasis0868, can you confirm whether your Splunk instance's GUI access is HTTPS-enabled? If not, your splunk_stream_app_location setting on the UF must be http://xxxxx:8000/en-us/custom/splunk_app_stream/

Regarding the error state of the Splunk instance itself, please confirm that you ran set_permissions.sh to be able to start streamfwd.exe.
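A hedged sketch of where that setting lives on the forwarder, in the Stream add-on's inputs.conf; the xxxxx host is the placeholder from the post above, not a real value:

# inputs.conf in the Splunk_TA_stream app on the UF -- host is a placeholder
[streamfwd://streamfwd]
splunk_stream_app_location = http://xxxxx:8000/en-us/custom/splunk_app_stream/
disabled = 0

The scheme (http vs. https) here has to match how the Splunk Web GUI is actually served, which is the point of the question above.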
First, the OR operator is certainly usable in tstats. Try this:

| tstats count values(sourcetype) values(source) where index = _introspection (sourcetype = kvstore OR source="/Applications/Splunk/var/log/introspection/disk_objects.log")

On my laptop, this gives:

count   values(sourcetype)    values(source)
3059    kvstore               /Applications/Splunk/var/log/introspection/disk_objects.log
        splunk_disk_objects   /Applications/Splunk/var/log/introspection/kvstore.log
                              /Applications/Splunk/var/log/introspection/kvstore.log.1

So, the problem is elsewhere. To troubleshoot, you need to examine the data very closely. For example,

| tstats count values(serviceType) where index="my_index" eventOrigin="api" (accountId="8674756857")
| tstats count values(accountId) where index="my_index" eventOrigin="api" (serviceType="unmanaged")

and so on. Also explore your index search output. Without seeing actual data, it is very difficult to give a haircut over the phone, but there doesn't appear to be a bug in this aspect.
What incident management software are you using?