All Posts

Hi, so like in the screenshot - but here it is again:

|mstats max("% Free Space") as "MB", max("Free Megabytes") as "FreeMB" WHERE index=m_windows_perfmon AND host=NTSAP10 span=1d by instance
|search instance!=hard*
|search instance!=_Total
|eval FreeDiskspace=round(FreeMB/1024,2)
|eval TotalDiskspace=round((FreeDiskspace/MB)*100,2)
|eval FullDiskspace=round(TotalDiskspace-FreeDiskspace,2)
|stats max("FreeDiskspace") as "Free Diskspace (GB)", max("FullDiskspace") as "Full Diskspace (GB)" by instance

So it's metrics I'm trying to use: the "% Free Space" and "Free Megabytes" metrics from my Windows perfmon index. I exclude instances containing "hard" or "_Total", and then eval three versions of the disk space. This way I have the free disk space in GB, the total disk space, and the full (used) disk space. The free and the full (used) disk space are the two columns in the table, again as seen above, but when I try a pie chart it does not show what I'm looking for. I'd like one pie chart per instance, with each pie showing the free and used space together. Right now it only shows me one of two things:
- only the free or only the full space per instance
- all free spaces from all instances in one pie chart
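Not the original poster's solution, but one common way to get there (a sketch, assuming the search above already yields one row per instance with the two columns): reshape the result into one row per instance/measure pair with untable, then enable the Trellis layout on the pie chart and split by instance, so each instance gets its own pie with a "Free" and a "Used" slice.

|mstats max("% Free Space") as "MB", max("Free Megabytes") as "FreeMB" WHERE index=m_windows_perfmon AND host=NTSAP10 span=1d by instance
|search instance!=hard* instance!=_Total
|eval Free=round(FreeMB/1024,2)
|eval Total=round((Free/MB)*100,2)
|eval Used=round(Total-Free,2)
|stats max(Free) as "Free (GB)", max(Used) as "Used (GB)" by instance
``` one row per instance and measure, suitable for a trellised pie ```
|untable instance measure GB

With measure as the category and GB as the value, the Trellis split on instance renders one small pie per instance instead of a single combined chart.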
Hi @haleyh44,
one additional piece of information: do you want HA on your data or not? To have HA you need to create an Indexer Cluster, which requires an additional machine (the Cluster Manager) that cannot be one of the others. Anyway, the two new machines have different requirements in terms of disk space: the new Indexers should have the same storage as the old server.

If you don't want HA, you have to:
- install Splunk on the two new servers,
- copy the indexes.conf and the Technology Add-ons from the old server to the one that will be an Indexer,
- copy all the apps from the old server to the new server that will be the Search Head,
- configure the Search Head for Distributed Search as described in the links shared by @JohnEGones.

If you want HA, you have to:
- install Splunk on the three servers,
- configure an Indexer Cluster on the old server and two of the new ones,
- copy the indexes.conf and the Technology Add-ons from the old server to the Cluster Manager,
- copy all the apps from the old server to the new server that will be the Search Head,
- configure the Search Head for Distributed Search as described in the links shared by @JohnEGones.

For more info about Splunk architectures see https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf
Ciao.
Giuseppe
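As a sketch of the Distributed Search step (the hostnames below are placeholders, not from this thread): besides the UI, the Search Head can declare its indexers as search peers in distsearch.conf:

# distsearch.conf on the Search Head (hypothetical hostnames)
[distributedSearch]
servers = https://indexer1.example.com:8089, https://indexer2.example.com:8089

The same can be done from the CLI with `splunk add search-server https://indexer1.example.com:8089 -auth <admin>:<pass> -remoteUsername <user> -remotePassword <pass>`, which also handles the key exchange between the Search Head and the peer.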
Well, if we are unearthing this, then:

index=logins | dedup 5 login
Too many brackets - try like this <colorPalette type="expression">case (match(value,"Large Effect") OR match(value,"No"),"#ff0000",match(value,"Medium Effect"), "#ffff00",match(value,"Small Effect"),"#00ff00",true(),"#ffffff")</colorPalette>
It works now. Thank you very much.
Hi @chaturvedi,
if you're speaking of Windows logs, you can use whitelists and blacklists to choose the data to index. You can find more info at https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Inputsconf
Otherwise, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.0/Forwarding/Routeandfilterdatad
Ciao.
Giuseppe
Hi @s_unny,
as @isoutamo also said, you don't have enough disk space on your desktop. I suggest reducing the retention time of your indexes, starting with _internal and the largest indexes.
Ciao.
Giuseppe
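A sketch of what shortening retention looks like in indexes.conf (the stanza names and values are illustrative, not from this thread). Once events are older than frozenTimePeriodInSecs, or the index grows past maxTotalDataSizeMB, Splunk freezes the oldest buckets, which by default means deleting them:

# indexes.conf (example values - adjust per index)
[_internal]
# keep roughly 7 days instead of the default ~30
frozenTimePeriodInSecs = 604800

[my_large_index]
# cap the total index size at about 50 GB
maxTotalDataSizeMB = 51200

After changing these, a restart is needed for the settings to take effect, and already-frozen data is not recoverable, so it's worth double-checking the values before applying them.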
I need to create an alert, but the data to be fetched from the server is using a lot of license in Splunk. The data to fetch is a few keywords from an Excel file that will be available on the server. I need to install the Universal Forwarder on the servers. Is it possible to make any changes at the Universal Forwarder level so that it forwards only the keywords to Splunk? If not, what alternative is there to ingest the data without using a lot of Splunk license?
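Not an answer from this thread, but the usual pattern for keyword-based filtering is a props/transforms pair that routes everything to the nullQueue and only matching events to the indexQueue. Note that a Universal Forwarder does not parse events, so these stanzas have to live on a heavy forwarder or on the indexer; the sourcetype and keywords below are placeholders:

# props.conf
[my_sourcetype]
TRANSFORMS-filter = drop_all, keep_keywords

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_keywords]
REGEX = keyword1|keyword2|keyword3
DEST_KEY = queue
FORMAT = indexQueue

The transforms run in the listed order, so drop_all sends every event to the nullQueue first and keep_keywords then re-routes only events matching the keywords back to the indexQueue; only those count against the license.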
Unfortunately not - I don't understand it myself
Hi @jalbarracinklar,
one of my customers had the same issue; they opened a ticket with Splunk Support and it was quickly solved.
Ciao.
Giuseppe
Hi @chimuru84,
with this search you take all the check events in your logs and compare them with the users list, so you can determine whether there's some user that didn't do any check on the third party. The main job is extracting the check events from your logs, and I cannot help you with that because I don't know your logs; then you can use my search to compare the results with the users list.
Ciao.
Giuseppe
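As a hedged sketch of that comparison (the index, sourcetype, lookup name, and field names are placeholders, since the actual logs aren't shown in the thread): count check events per user, append the full user list with a zero count, and keep the users whose total stays at zero:

index=your_index sourcetype=your_checks "check"
| stats count by user
| append [| inputlookup users_list.csv | fields user | eval count=0]
| stats sum(count) as checks by user
| where checks=0

Users who appear only in the lookup contribute nothing but the appended zero, so they survive the final where clause as the ones who never did a check.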
Hi @jjohn149,
maybe it's an impression, but the searches seem the same; probably the values in the conditions are different, but I would put the three searches in one, thus also avoiding the 50,000-result limit of the subsearch. So in my example I will use condition1, condition2 and condition3, to adapt to your real need:

index=osp source=xxx EVENT_TYPE IN (event_type1, event_type2, event_type3) EVENT_SUBTYPE IN (event_subtype1, event_subtype2, event_subtype3) field1=* field3 IN (field31, field32, field33) field4=""
| eval DATE=strftime(_time, "%Y-%m-%d")
| stats latest(eval(if(field3=="field31",source,""))) AS example1 latest(eval(if(field3=="field32",source,""))) AS example2 latest(eval(if(field3=="field33",source,""))) AS example3 by field5 field6 DATE

Ciao.
Giuseppe
Forget all the older posts that do not have an answer. So, you are telling fellow Splunk users that you have a Windows source that outputs 0 in the page_printed field when someone printed more than 0 pages on a Windows machine, and that when you print 5 pages on that Windows machine, this Windows source gives 1 in the total_pages field. Is this correct? Because that is what your sample code would suggest. There is nothing Splunk does in your code to change, aggregate, or do anything else to affect these field values. If all the older posts you linked are like this, no wonder they received no answer, because this is not a Splunk question. I suggest the following:
1. Examine the event log directly on that Windows machine to see if it has the correct values. Troubleshoot Windows if those values are believed to be bad. The Splunk forum will not be useful to you.
2. Compare the source events Splunk ingested from that Windows machine with your direct copy of the Windows log. Troubleshoot the ingestion problem if they are different. (Getting Data In is a better forum for this, but make sure you present evidence that the two are different. Alternatively, engage support.)
You could use a search like this to check if the entities mapped in a service are receiving events within a specified time frame; if not, you could consider them unstable and alert:

| inputlookup itsi_entities append=true
| rename services._key as service_key
| rename title as entity
| fields entity, service_key
| where isnotnull(service_key)
| mvexpand service_key
| inputlookup service_kpi_lookup append=true
| eval key=coalesce(service_key,_key)
| stats values(entity) as host, values(title) as service by key
| mvexpand host
| dedup host
| fields host
| eval host=lower(host)
| join type=outer host
    [| metadata type=hosts index=_internal
    | eval host=lower(host)
    | eval status = if(lastTime>now()-180,1,0)]
| eval status=if(status=1,1,0)
Hi @Footoasis0868,
Can you confirm whether your Splunk instance's GUI access is HTTPS-enabled? If not, your splunk_stream_app_location setting on the UF must be http://xxxxx:8000/en-us/custom/splunk_app_stream/
Regarding the error state of the Splunk instance itself, please confirm you ran set_permissions.sh so that streamfwd.exe is able to start.
First, the OR operator is certainly usable in tstats. Try this:

| tstats count values(sourcetype) values(source) where index = _introspection (sourcetype = kvstore OR source="/Applications/Splunk/var/log/introspection/disk_objects.log")

On my laptop, this gives:

count: 3059
values(sourcetype): kvstore, splunk_disk_objects
values(source): /Applications/Splunk/var/log/introspection/disk_objects.log, /Applications/Splunk/var/log/introspection/kvstore.log, /Applications/Splunk/var/log/introspection/kvstore.log.1

So, the problem is elsewhere. To troubleshoot, you need to examine the data very closely. For example,

|tstats count values(serviceType) where index="my_index" eventOrigin="api" (accountId="8674756857")
|tstats count values(accountId) where index="my_index" eventOrigin="api" (serviceType="unmanaged")

and so on. Also explore your index search output. Without seeing actual data, it is very difficult to perform a haircut over the phone, but there doesn't appear to be a bug in this aspect.
What incident management software are you using?
As everyone will tell you, you are better off not using union and join, especially as your mock code suggests their similarity. The best way to get help is to follow these golden rules that I call the four commandments:
1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Sorry I missed the definition here. This is easily fixable:

index=owner ``` where source != host.csv ```
| rename ip as ip_address
| append [inputlookup host.csv | eval source = "host.csv"]
| stats values(owner) as owner values(source) as source by host ip_address
| where source == "host.csv"
| fields - source

Here, I am back to using the side effect of Splunk's multivalue equality. Here is the full emulation:

| makeresults format=csv data="ip, host, owner
10.1.1.3, host3, owner3
10.1.1.4, host4, owner4
10.1.1.5, host5, owner5"
| eval source = "not-host.csv"
``` the above emulates index=owner ```
| rename ip as ip_address
| append [makeresults format=csv data="ip_address, host
10.1.1.1, host1
10.1.1.2, host2
10.1.1.3, host3
10.1.1.4, host4"
``` the above emulates | inputlookup host.csv ```
| eval source = "host.csv"]
| stats values(owner) as owner values(source) as source by ip_address host
| where source == "host.csv"
| fields - source
Hello Cisco Security team,
Firstly, I'd like to say thank you for creating such a great Splunk app! I am now playing with it and found that this app receives syslog directly on the combined Splunk instance itself. I would like to install it in a test network where the FMC generates approx. 300-500 MB of syslog per hour. Assuming 700 bytes per event, that could reach about 200 events per second.
https://community.cisco.com/t5/network-security/fmc-connection-events-log-size-and-location/td-p/4769765
What number of events per second is this application designed to handle? Any advice on performance, such as utilizing multiple sockets, modifying the receive buffer size, etc., would be appreciated.
Thank you,
Urikura