All Posts


Hey @andrewtrobec, while the collection name is indeed maintenance_calendar, the lookup definition name is itsi_maintenance_calendar, so give that a try instead. I suspect you will have to play around with the supported fields to get the schedules themselves to come up. Let me know if this helps, avd
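As a quick sanity check (assuming the lookup definition is shared to the app you are searching from), you could confirm the definition resolves with something like:

```spl
| inputlookup itsi_maintenance_calendar
```

If that returns rows, the definition name is correct and the remaining work is just matching the schedule fields.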
Not by using just SimpleXML elements. You might be able (but I'm not sure about that; I'm not a frontend developer) to add some custom JS to change colours of single trellis but in general it's not supported out-of-the-box.
And remember that for search-time operations it matters whether you have sufficient permissions for the app (that should not typically be the issue here, but it's worth checking if all else fails).
Make sure the sourcetype on your data matches that for the FIELDALIAS.
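For reference, a minimal sketch of what the alias stanza needs to look like in props.conf (the stanza name here is an assumption; use whatever sourcetype your Sysmon events actually carry):

```ini
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
FIELDALIAS-dest_port = DestinationPort AS dest_port
```

The stanza header must match the sourcetype (or source/host) of the events; otherwise the alias never fires.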
Cool, never saw that streamstats thingy, I'll test it and let you know
I have the below trellis; is there a way to change the color for each trellis panel? My code, from a Classic Dashboard:

search Cu $t_c$ En $t_e$ | timechart span=1h avg(Value) as AvgValue_Secs by Category

I want something like this:
Additional info: you can see the limit being hit in the Job Inspector:
Thanks for pointing me to this excellent log source! I created a search which is intended to exceed the subsearch maxout limit:

index=main [search index=main earliest=-1w | fields host | head 11000 | format]

(I checked that the subsearch returns over 11000 records.) This doesn't trigger an error or warning in _audit, _internal, or from the GUI. From the limits.conf documentation I see this setting:

maxout = <integer>
* Maximum number of results to return from a subsearch.
* This value cannot be greater than or equal to 10500.
* Default: 10000

So I would expect my search to exceed the limit. I'm now confused as to whether I've misinterpreted something.
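One thing worth ruling out: the limit actually in effect may differ from the spec-file default if an app overrides it. A sketch of checking the effective value via REST (the endpoint path assumes a standard deployment):

```spl
| rest /services/configs/conf-limits/subsearch splunk_server=local
| fields maxout
```

Note also that the `format` command has its own result cap in limits.conf ([format] maxresults), so the effective cutoff for a `[search ... | format]` subsearch may not be the [subsearch] maxout value at all.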
Hello to all my dear friends. We have a search head cluster with 5 search heads and Enterprise Security (ES). When I want to add a new threat list as a URL, I have to go to this page: ES App > Configure > Data Enrichment > Threat Intelligence Management. But after clicking on this page, an "Oops" message is displayed. Can anyone help? Is the inputs.local method the right approach? Special thanks to Splunk.
Hello to all my dear friends. In the past I was able to import the logs of malware detected by McAfee into Splunk using Splunk DB Connect. Now my question is: can I also get a log of access to the McAfee central management console? Also, in which table are the logs related to USB connections stored, and how can I receive them in Splunk?
Technically, the search msg="*firewall off*" will not match "the firewall has been turned off", but assuming that's understood, this may work for you:

index=abc msg="*firewall off*" OR msg="*system updated*"
| streamstats time_window=30s dc(msg) as msgTypes count by hostname
| where (match(msg, "firewall off") AND count=1) OR (count>1 AND msgTypes=1)
| table _time, hostname, msg

It uses streamstats to combine events within a 30-second time window - set that to your expected range. The where clause keeps only those events where there is just a "firewall off" message, OR where there are multiple "firewall off" messages but no "system updated" message.
I expect that if you are using Splunk Cloud you will have to put in a support ticket.

If you want to UPDATE existing rows in the data, then you must OUTPUT _key along with the other fields, so that events found in the lookup carry the _key field, which is then used to update the row. For servers that are NOT found in the lookup, _key will be null. When you use outputlookup, use append=t so that it adds new entries where _key is null and updates the existing rows where _key already exists.
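A minimal sketch of the pattern described above (the lookup and field names here are illustrative, not from the original question):

```spl
| lookup my_servers_lookup Server OUTPUT _key, status
| eval status="updated"
| outputlookup append=t my_servers_lookup
```

Rows that matched keep their existing _key and are updated in place; rows where _key stayed null are appended as new entries.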
Hi, I have a 'complex' (for me at least) question. What I want to achieve is the following:

1) index=abc msg="*firewall off*" | table _time, hostname, msg

This will give me, for example:
hostname = machine1
msg = "the firewall has been turned off"

I want to be alerted if someone turns off the firewall. Now, the actual issue I have is the following: a few seconds before this event, I might get a "system updated" event that updates the firewall (agent update), which is OK, and I do NOT want to be alerted for that. I would need to combine both queries into one alert.

2) index=abc hostname=machine1 NOT msg="*system updated*"

I want to see the result of 1, but only if it was not preceded by 2. I hope this makes sense.
Hey, nice write-up; this is certainly an interesting subject. The whole SSL/TLS implementation seems a bit rushed and indeed not very well documented.

Did you try this on a search head cluster? Because the docs page you referred to clearly states: "TLS host name validation only works for search head clusters that use App Key Value Store." Also, in server.conf.spec the [kvstore] serverCert setting says:

* Only used when Common Criteria is enabled (SPLUNK_COMMON_CRITERIA=1) or FIPS is enabled (i.e. SPLUNK_FIPS=1).

My conclusion is that when the KV store is in stand-alone mode there is no need to verify certificates, since there will never be external connections in either direction. When traffic is localhost-only, I guess Splunk considers it "secure" enough - unless FIPS or CC is enabled. But I find it very annoying that you get a warning at each start-up that the KV store is not "secure" even though it is stand-alone.

I haven't had the opportunity to test this in a clustered environment yet, but I will for sure let you know if I do. Please let me know if you make any progress in this matter.
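For anyone following along, the setting in question lives in the [kvstore] stanza of server.conf - a minimal sketch, with the certificate path being an assumption for a typical install:

```ini
[kvstore]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
```

As quoted above from server.conf.spec, this is only honored under Common Criteria or FIPS mode, which would explain why it appears to do nothing on a plain stand-alone instance; check server.conf.spec for the other ssl* settings in that stanza.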
Hi all, I am having an issue creating an alias, simply going from DestinationPort to dest_port for Sysmon EventID 3. I have tested:

index=my_index source=Sysmon | eval destinationPort=dest_port

I have seen in the Splunk TA for Sysmon that there is FIELDALIAS-dest_port = DestinationPort AS dest_port, but I still cannot convert DestinationPort to dest_port at search time. Any suggestions, please? There are no other apps contradicting the precedence. Thank you!
This also worked for me, upgrading from 9.0.4 to 9.1.1.
Hello, we are implementing Splunk in our environment, and right now I import our vulnerability scan into Splunk every 7 days. My task is to filter by host and CVE number and determine which host/CVE is new in the newest scan ("New"), which was in the old scan but is not in the new one ("Finished"), and which is in both scans ("Unchanged").

The problem is I do not have any field in the log data indicating that a host is finished. I have only 4 fields: CVE, extracted_Host, Risk level (Critical, High, and Medium), and _time of course. This is my attempt:

index=vulnerability_scan Risk=Critical earliest=-7d latest=now
| stats values(CVE) as CVE_7d by extracted_Host
| appendcols [ search index=vulnerability_scan Risk=Critical earliest=now -7d latest=now
    | stats values(CVE) as CVE_now by extracted_Host ]
| eval Status=case(isnull(CVE_7d) AND isnotnull(CVE_now), "New", isnotnull(CVE_7d) AND isnull(CVE_now), "Finished", isnotnull(CVE_7d) AND isnotnull(CVE_now), "Not Changed")
| table extracted_Host, Status

The problem is that with this I only get the output "Finished", but most results are in the old scan, meaning they should be "Unchanged". It would also be fine for me to split out the 3 statuses; then I would build a dashboard with the 3 pieces of information. I don't know if Splunk is the best tool for comparing two time ranges like this? The time range is 7 days every time - maybe it will be shorter in the future, but right now it's 7 days. Thanks for the help.
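One way to avoid appendcols entirely (a sketch, assuming the two scans can be told apart by time alone): tag each event with its scan period, then compare per host/CVE pair:

```spl
index=vulnerability_scan Risk=Critical earliest=-14d latest=now
| eval period=if(_time >= relative_time(now(), "-7d"), "new", "old")
| stats values(period) as periods by extracted_Host, CVE
| eval Status=case(mvcount(periods)==2, "Unchanged",
                   periods=="new", "New",
                   periods=="old", "Finished")
| table extracted_Host, CVE, Status
```

appendcols aligns rows by position rather than by host, which is one likely reason the original search mislabels most results.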
That works nicely, thanks @richgalloway. I just had to tweak the where to get the list of undeleted items.

| eventstats count(eval(Status="Deleted")) as is_deleted by Name
| where is_deleted=0
| table Name is_deleted Status
I need to configure a service in Splunk ITSI, and while creating a KPI I am facing an issue. I gave a search string, but when it generates the search I get an error in the result:

Error in 'SearchParser': The search specifies a macro 'aggregate_raw_into_entity' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings > Advanced search > Search macros to view macro information.

Is there any way to modify the generated search?
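Before editing the generated search, it may be worth checking whether the macro exists at all and which app it is scoped to - a sketch using the macros REST endpoint:

```spl
| rest /servicesNS/-/-/admin/macros splunk_server=local
| search title="aggregate_raw_into_entity*"
| table title, eai:acl.app, eai:acl.sharing
```

If the macro shows up under a different app with app-only sharing, fixing its permissions is usually a better path than hand-editing the generated KPI search.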