All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Does Splunk have a RHEL OVA package for UEBA? Ubuntu is not supported in our environment, and the OVA is so much easier to install.
Is there an easy/supported way to run the health checks on the Monitoring Console on a schedule and create alerts of some kind (e.g. email), so that the checking is automatic?
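One approach, sketched rather than an official feature: schedule a search against the splunkd health REST endpoint and alert when anything is not green. The endpoint below exists in recent Splunk versions, but verify the returned field names on your own instance before relying on this.

```
| rest /services/server/health/splunkd splunk_server=local
| fields health
| where health!="green"
```

Saved as an alert that triggers when the result count is greater than zero, with an email alert action attached, this approximates running the Monitoring Console health checks automatically.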
I have LDAP logs that give me events that look like this:

Feb 21 13:13:22 ldap.foo.com slapd[28026]: conn=15306 fd=108 ACCEPT from IP=123.4.5.67:48504 (IP=0.0.0.0:636)
Feb 21 13:13:22 ldap.foo.com slapd[28026]: conn=15306 op=0 BIND dn="" method=128
Feb 21 13:13:22 ldap.foo.com slapd[28026]: conn=15306 op=0 RESULT tag=97 err=0 text=
Feb 21 13:13:22 ldap.foo.com slapd[28026]: conn=15306 op=1 SRCH base="dc=bar,dc=foo,dc=com" scope=0 deref=0 filter="(objectClass=*)"
Feb 21 13:13:22 ldap.foo.com slapd[28026]: conn=15306 op=1 SEARCH RESULT tag=101 err=0 nentries=1 text=
Feb 21 13:13:22 ldap.foo.com slapd[28026]: conn=15306 op=2 UNBIND

I've been using subsearch functionality to get the "conn" value for each BIND attempt with an empty dn, and then use those conn values to show me the IP (123.4.5.67 in the ACCEPT line above) where that bind came from. This is an attempt that doesn't quite work:

host=ldap.foo.com sourcetype=openldap:access ACCESS [search host=ldap.foo.com" sourcetype=openldap:access "BIND dn=\"\"" |table conn |dedup conn |format ]

The subsearch, run alone, returns exactly what I want: say, 100 events of an empty bind dn for my chosen timeframe. However, when I then use that to feed the main search, where I believe I'm asking "find an event with those same conn values, but with the word ACCEPT in it", it brings back ALL the events with ACCEPT (say, 300 events), including those with a non-empty bind dn. I feel I'm missing something simple, but I'm at a loss and have sliced and diced several ways without joy. A nudge in the right direction would be most welcome.
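Two things stand out in the attempt above, offered as observations rather than a confirmed diagnosis: there is a stray double quote after host=ldap.foo.com inside the subsearch, and the main search filters on the word ACCESS while the sample events contain ACCEPT. A sketch of the corrected shape, using fields so the subsearch hands back conn=value pairs to the outer search:

```
host=ldap.foo.com sourcetype=openldap:access ACCEPT
    [ search host=ldap.foo.com sourcetype=openldap:access "BIND dn=\"\""
      | dedup conn
      | fields conn ]
```

An explicit trailing | format is usually unnecessary, since formatting is implied at the end of a subsearch.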
I'm trying to get a pie chart depicting teams to be drillable, opening up the respective team dashboard. I know that my link works because of the final condition, but I can't seem to get any condition to match the clicked value; everything seems to be acceptable syntax-wise but gets ignored. I've searched several topics and seen several versions, but nothing seems to be working for me. On Splunk Enterprise 7.3.2:

<title>Team Moves for Month to Date</title>
<search>
  <query>sourcetype=edc source="*dc_*" Direction="*" User!="User" | search CMTeam!="" | top limit=15 CMTeam</query>
  <earliest>@mon</earliest>
  <latest>now</latest>
</search>
<option name="charting.chart">pie</option>
<option name="charting.seriesColors">[0x0066FF,0xFFCC00,0xFF3300,0x009933,0x009999,0x9900CC,0x000000,0xCC0000,0x000099,0x00CC00,0x33ADFF,0xFF00FF]</option>
<option name="refresh.display">progressbar</option>
<drilldown>
  <condition match="'click.value2' == &quot;ASAP&quot;">
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___asap</link>
  </condition>
  <condition match="'click.value' == &quot;Beacon&quot;">
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___beacon</link>
  </condition>
  <condition field="OpTime">
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___optime</link>
  </condition>
  <condition field="Anesthesia">
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___anethesia</link>
  </condition>
  <condition>
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___optime</link>
  </condition>
</drilldown>
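One thing worth checking, offered as a guess rather than a confirmed fix: on a pie chart, the clicked slice label is exposed as click.value while click.value2 holds the slice's numeric size, so a condition comparing click.value2 to a team name like "ASAP" can never match, and field="OpTime" conditions match a clicked field name rather than a slice label. A sketch of the drilldown using the label token throughout (the <fullpath> placeholders are kept from the original; verify the exact match-expression syntax against the Simple XML reference for 7.3):

```
<drilldown>
  <condition match="'click.value' == &quot;ASAP&quot;">
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___asap</link>
  </condition>
  <condition match="'click.value' == &quot;Beacon&quot;">
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___beacon</link>
  </condition>
  <condition>
    <link target="_blank">https:<fullpath>/app/search/epic_data_courier__cm___optime</link>
  </condition>
</drilldown>
```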
The below is the text we are capturing:

Filename= &Filename=C%3A%5CUsers%5Cjbaile16%5CAppData%5CRoaming%5CDocumentum%5CViewed%5CSlip+End+3_Quote_AVNAN1900010_Kofax_LMPR_PL_2563798.pdf&Download=0&DownloadSize=144780 HTTP/1.1" 200 3 "-" "Java/1.8.0_192"

We used this regex: rex field=_raw "U(?\S{1,}.[gf])"

With it we are able to extract:

Users%5Cjbaile16%5CAppData%5CRoaming%5CDocumentum%5CViewed%5CSlip+End+3_Quote_AVNAN1900010_Kofax_LMPR_PL_2563798.pdf

but now we want to remove %5C from the extracted text and replace it with a space.
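Assuming the extracted value lands in a field called doc_path (the capture-group name in the post was eaten by the forum, so this name is a placeholder), eval's replace() can swap each %5C for a space:

```
| rex field=_raw "U(?<doc_path>\S+\.pdf)"
| eval doc_path_clean=replace(doc_path, "%5C", " ")
```

If full URL decoding is wanted first (e.g. + back to spaces, %3A to a colon), eval also offers urldecode().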
Running Splunk Enterprise 8.0.0 on an internal network. I went away on vacation for a few weeks with Splunk working fine and came back to find it was not. I'm not sure how long it had been down, and no one could really tell me what changed.

The first problem was that a service account password policy had been implemented, so Splunk's service account password changed and it wasn't updated in the services that launch Splunk. Once that was fixed we could launch Splunk, and then we received the errors. Originally we were using ADFS for SSO and it worked fine, but now when going to the site we get the error: "IDP failed to authenticate. Status Code="Responder" Check Splunkd.log for more information about the failure."

I enabled web debug and it shows SSO Enabled as No. The certificate has not expired. I removed and set up SSO again following https://www.splunk.com/en_us/blog/cloud/configuring-microsofts-adfs-splunk-cloud.html. Currently I just log in locally to ensure it's still collecting data. The splunkd logs show:

ERROR Saml - No extra status code found in SamlResponse, Not a valid status. Could not evaluate xpath expression /samlp:Response/samlp:Status/samlp:StatusMessage or no matching node foundNo value found in SamlResponse for key=/samlp:Response/samlp:Status/samlp:StatusMessage or no matching node foundCould not evaluate xpath expression /samlp:Response/samlp:Status/samlp:StatusDetail/Cause or no matching node foundNo value found in SamlResponse for key=/samlp:Response/samlp:Status/samlp:StatusDetail/Cause
ERROR UiSAML - IDP failed to authenticate request. Status Message="" Status Code="Responder"
ERROR UiSAML - IDP failed to authenticate request. Status Code="Responder"
Every time I log into Splunk, I'm met with the following question: "It looks like this is your first time on this page. Would you like to take a quick tour?" It is not my first time on the page - the page is bookmarked because I'm there every day. How do I get Splunk to stop asking this question? I tried completing the tour to see if that would 'trick' it into stopping, but no luck. Splunk Enterprise Version: 7.3.0 Build: 657388c7a488
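One possibility, sketched from the ui-tour.conf spec rather than tested against 7.3.0: tours record a per-user viewed flag, and setting it by hand should suppress the prompt. The stanza name below is a placeholder; the actual tour names can be found in $SPLUNK_HOME/etc/apps/search/default/ui-tour.conf.

```
# $SPLUNK_HOME/etc/users/<username>/search/local/ui-tour.conf
# (path and stanza name are assumptions; adjust for your user, app, and tour)
[<tour-name>]
viewed = 1
```

A restart of splunkweb (or a browser cache clear) may be needed before the change takes effect.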
I'm trying to create a dashboard that will identify when a server stops sending data to Splunk.

host="hostname1" OR host="hostname2" OR host="hostname3" | dedup host | stats count by host

The idea is that it shows no results until one of those servers stops sending data, and then it shows, for example, hostname1. If anyone can suggest a better way of doing this overall, I'd appreciate it.
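A sketch of one common pattern: compare each host's most recent event time against now, so hosts go stale visibly rather than silently disappearing. The 15-minute threshold is an arbitrary placeholder.

```
| tstats latest(_time) as last_seen where index=* host IN ("hostname1","hostname2","hostname3") by host
| eval minutes_silent=round((now()-last_seen)/60, 0)
| where minutes_silent > 15
```

Unlike the dedup/stats approach, this also surfaces how long each host has been quiet. Note that tstats only sees hosts with at least one indexed event in the search window, so pair it with a lookup of expected hosts if a server can vanish from the window entirely.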
Hi All, I can't put an eval before my search syntax, so I am trying to use an eval macro called "FriendlyEval". However, I can't seem to find a way to call it!

The macro:

| eval Friendly=$Friend$
| lookup Friendly_Name.csv Friendly OUTPUT FullHost
| lookup Friendly_Name.csv Friendly OUTPUT FullHostHSB

The search (FriendlyEval needs to be called where marked):

eventtype=eop_WinEventLog:Application
<-- call FriendlyEval here -->
host IN (FullHost, FullHostHSB)
Message="OMIS $omis01$" OR TaskCategory="omis $omis01$"
Type IN ($Type01$)
| table _time host TaskCategory Type EventCode Message
| sort - _time
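Macros are invoked with backticks once defined in Settings > Advanced search > Search macros (or in macros.conf). If $Friend$ is meant to be a macro argument rather than a dashboard token, the macro name needs an argument count. A sketch, with the argument handling as an assumption:

```
# macros.conf (sketch; one argument named Friend)
[FriendlyEval(1)]
args = Friend
definition = eval Friendly="$Friend$" | lookup Friendly_Name.csv Friendly OUTPUT FullHost | lookup Friendly_Name.csv Friendly OUTPUT FullHostHSB
```

Because the definition begins with eval, the call needs a leading pipe, e.g. eventtype=eop_WinEventLog:Application | `FriendlyEval("somevalue")`, and any filtering on the looked-up fields then has to happen in a later | search or | where stage rather than in the initial search terms.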
Inconsistency in the file names coming from Microsoft AV hashes is causing alerts to populate null results when firing after a file has been quarantined. Currently, we match the hashes against a lookup that is generated by a saved search.

We are having problems with our regex because the file names within the WinEventLog message are not consistent. We are trying to extract the file name. Most of the time our regex successfully pulls out the file name for the field, but there are times when it is not extracted properly because the log format differs (returning values that append (GZip) or other characters). The main problem is that we see the entire zip location and not just the actual file itself. Any suggestions would be awesome.

The query used to look at these events, do the extraction, and output to a lookup (this report runs every 5 minutes and scans for new files and hashes):

index=wineventlog (sourcetype="WinEventLog:System" OR sourcetype="WinEventLog:Microsoft-Windows-Windows Defender/Operational") EventCode=1120 (SourceName="Microsoft Antimalware" OR SourceName="Microsoft-Windows-Windows Defender") earliest=-5m@m latest=@m
| rex field=Threat_resource_path "(?[^\\]$)"
| stats count BY file_name ComputerName Hashes
| fields - count
| inputlookup append=t .csv
| dedup ComputerName Hashes
| outputlookup .csv

Example of a log that has trouble with extraction:

02/20/2020 02:20:44 AM
LogName=Microsoft-Windows-Windows Defender/Operational
SourceName=Microsoft-Windows-Windows Defender
EventCode=1120
EventType=4
Type=Information
ComputerName=
User=NOT_TRANSLATED
Sid=
SidType=0
TaskCategory=None
OpCode=Info
RecordNumber=33310
Keywords=None
Message=Windows Defender Antivirus has deduced the hashes for a threat resource.
Current Platform Version: 4.18.1911.3
Threat resource path: C:\Users\usergoeshere\AppData\Local\Google\Chrome\User Data\Default\Cache\f_019fa8->(GZip)
Hashes: SHA1:5b803dc7f6c6ahashgoesheree0efadfbf6c5ba834;

The file_name extraction pulls the entire f_019fa8->(GZip) as opposed to just f_019fa8.
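A sketch of a rex that captures everything after the last backslash but stops before a trailing ->(...) marker when one is present (backslashes are doubled for SPL string quoting; the capture-group name file_name matches the existing search):

```
| rex field=Threat_resource_path "(?<file_name>[^\\\\]+?)(?:->\(\w+\))?$"
```

Against the sample path this should yield f_019fa8 rather than f_019fa8->(GZip). Test it against a broader sample of Defender events before swapping it into the scheduled report, since markers other than (GZip) may contain characters \w does not match.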
Hi all, we need to implement alerts related to Nessus scans and Windows systems. We have seen a few Windows-related ones in the Use Case Library in Enterprise Security, but I was wondering whether you have any other alerts we could implement beyond those. Thank you in advance.
Hello, I've got this search running on my search head:

Job report: "This search has completed and has returned 1,101 results by scanning 29,230,690 events in 860.672 seconds"

Execution time: 860.672 seconds, i.e. about 14 minutes and 20 seconds, running on "previous week". Here is the search:

index=csmsi_supervision_active u_ci_name=PE* cmd=check_interface_traffic
| fields u_ci_name, svc, ds, traffic_in_bps, traffic_out_bps, if_alias, _time
| dedup svc, ds
| eval Kbps_In=traffic_in_bps/1000, Kbps_Out=traffic_out_bps/1000, Periode=strftime(_time,"%Y-%V")
| rex field=if_alias "(?.*_vers_(?:(?:PE)|(?:P0)|(?:P1)|(?:CE)).*)"
| stats avg(Kbps_In) as "In_Moy", exactperc90(Kbps_In) as "In_Perc90", max(Kbps_In) as "In_Max", avg(Kbps_Out) as "Out_Moy", exactperc90(Kbps_Out) as "Out_Perc90", max(Kbps_Out) as "Out_Max", values(Periode) as "Periode", latest(_time) as "_time" by u_ci_name, rex_if_alias
| table Periode u_ci_name rex_if_alias In_Moy In_Perc90 In_Max Out_Moy Out_Perc90 Out_Max _time

I read that using accelerated data models could reduce the search duration, so I started to build one:

datamodel_name: CSMSI_ARGOSS_Active_Metrics (rebuilt)
node_name: metrics
node_childs: icmp and traffic, each one just hiding a few fields depending on which ones I need

Here is my search using the data model:

| tstats summariesonly=true values(metrics.u_ci_name) as u_ci_name, values(metrics.svc) as svc, values(metrics.ds) as ds, values(metrics.traffic_in_bps) as traffic_in_bps, values(metrics.traffic_out_bps) as traffic_out_bps, values(metrics.if_alias) as if_alias from datamodel=CSMSI_ARGOSS_Active_Metrics where nodename=metrics u_ci_name=PE*
| fields u_ci_name, svc, ds, traffic_in_bps, traffic_out_bps, if_alias, _time
| dedup svc, ds
| eval Kbps_In=traffic_in_bps/1000, Kbps_Out=traffic_out_bps/1000, Periode=strftime(_time,"%Y-%V")
| rex field=if_alias "(?.*_vers_(?:(?:PE)|(?:P0)|(?:P1)|(?:CE)).*)"
| stats avg(Kbps_In) as "In_Moy", exactperc90(Kbps_In) as "In_Perc90", max(Kbps_In) as "In_Max", avg(Kbps_Out) as "Out_Moy", exactperc90(Kbps_Out) as "Out_Perc90", max(Kbps_Out) as "Out_Max", values(Periode) as "Periode", latest(_time) as "_time" by u_ci_name, rex_if_alias
| table Periode u_ci_name rex_if_alias In_Moy In_Perc90 In_Max Out_Moy Out_Perc90 Out_Max _time

but it gives no results (0 results found), in "8 seconds executing time" according to search.log.

My question is: where is my issue?

PS 1: this is the first time I write this kind of search.
PS 2: I've got other searches running on "previous month" that abort after 2+ hours by timeout.

Thanks for helping.
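Two likely culprits, offered as guesses: inside tstats, data model fields must keep their node prefix even in the where clause, so the bare u_ci_name=PE* filters on a field tstats does not know and drops everything; and summariesonly=true returns nothing when the acceleration summary has not finished building (or acceleration is not enabled). A sketch of the adjusted head of the search:

```
| tstats summariesonly=false
    values(metrics.u_ci_name) as u_ci_name, values(metrics.svc) as svc, values(metrics.ds) as ds,
    values(metrics.traffic_in_bps) as traffic_in_bps, values(metrics.traffic_out_bps) as traffic_out_bps,
    values(metrics.if_alias) as if_alias
    from datamodel=CSMSI_ARGOSS_Active_Metrics
    where nodename=metrics metrics.u_ci_name="PE*"
```

Once this returns data, switch summariesonly back to true and confirm the acceleration summary is complete under Settings > Data models. Note also that values() collapses rows, so a by clause (for example by metrics.svc, metrics.ds, _time span=1m) is usually needed before the dedup/eval/stats pipeline behaves as it did in the raw-event search.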
Hi, I'm trying to offer an option to upload a CSV file from a dashboard and move the data directly into an index or KV store using the REST API. Any suggestions on how to achieve this?
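For the KV store path, a sketch assuming a collection named my_collection is already defined in collections.conf in the search app (the collection name, app, and credentials are placeholders): the dashboard's upload handler can POST each CSV row as a JSON document to the collection's data endpoint.

```
# Placeholder collection/app/credentials; one JSON document per CSV row
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/storage/collections/data/my_collection \
  -H "Content-Type: application/json" \
  -d '{"field1": "value1", "field2": "value2"}'
```

Writing into an index instead would typically go through the HTTP Event Collector rather than this endpoint.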
Hi, I have not actually updated my datetime.xml yet, and I want to know what happens to my data if I update it now. Do I need to re-ingest, reindex, or restore all my servers to 31 December 2019 and re-ingest the data received since then? I am still on version 7.1.9; if I upgrade to 7.3.4, will I still have a problem with my data? Thanks.
Dear all, I am querying an ITSM system with Splunk and monitoring the statuses of scheduled tasks, which are polled periodically. I would like to implement the following use case in a dashboard, highlighting results with distinct colours:

- if task A fails once in a 20-hour window, highlight its resulting status in yellow
- if task A fails twice in a 20-hour window, colour its resulting status red

My idea is to use an SPL eval that creates a new field with the count of failures in the window, then use where to compare relative timestamps. Any idea about the proper high-level SPL sequence would be highly welcome. Many thanks. Regards, Andy
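A high-level SPL sketch, with the index, sourcetype, and field names as placeholders since the ITSM source is not shown: count failures per task over the trailing 20 hours, then map the count to a colour value the dashboard can key on.

```
index=itsm sourcetype=task_status status=failed earliest=-20h
| stats count as failures by task_name
| eval colour=case(failures>=2, "red", failures==1, "yellow", true(), "green")
```

In a dashboard table, a <format type="color" field="colour"> element with a colorPalette map can then translate the colour field into cell highlighting.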
Hi all, I have a problem with events returned from Oracle DB. I have a column of type DATE (e.g. 2020-02-21 13:00:10), but in Splunk every event shows 2020-02-21 13:00:10.0 (zero milliseconds appended). How can I get rid of these zeros during data import? I already know how to do it in SPL.
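If the query in DB Connect can be edited, formatting the DATE in SQL before it reaches Splunk avoids the problem at import time; a sketch with placeholder table and column names:

```
SELECT TO_CHAR(event_date, 'YYYY-MM-DD HH24:MI:SS') AS event_date,
       other_col
FROM   my_table
```

The trailing .0 appears because the JDBC driver maps Oracle DATE to a timestamp type; TO_CHAR hands Splunk a plain string instead.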
Windows 10 64-bit
Java SE 1.8.0_192
splunk-sdk-java-1.6.5.jar
opencsv-2.3.jar

This only seems to occur for certain dates/data, which is very frustrating. It does not appear to be a data volume issue: larger row counts can be extracted successfully. That leads me to believe it's data-dependent, but I cannot determine the cause. Investigating the data via the Splunk web GUI was inconclusive.

while ((event = resultsReader.getNextEvent()) != null)

throws java.lang.ArrayIndexOutOfBoundsException: 3

Does the "3" value provide any insight? Detailed exception data attached as a graphic image.
Hi, I am new to Splunk. I have been trying to connect my Sky Hub to Home Monitor for a while, but I think it is not supported because I have the latest version of the router. Any suggestions on how to do it?
Hi, I am trying to override my current sourcetype to create multiple source types based on key matching patterns, but the settings are not working. My settings are as follows; please let me know where I am going wrong.

props.conf:

[transaction:logs]
BREAK_ONLY_BEFORE_DATE = true
SHOULD_LINEMERGE = true
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TRANSFORMS-sourcetypeoverwrite = receipts, businesstransaction

transforms.conf:

[receipts]
DEST_KEY = MetaData:Sourcetype
REGEX = (%retail)
FORMAT = sourcetype::transaction:logs

[businesstransaction]
DEST_KEY = MetaData:Sourcetype
REGEX = (%transaction)
FORMAT = sourcetype::transaction:logs

I also tried the rule:: option, but it is not working either. In my props.conf:

[rule::receipts]
sourcetype = receipt
MORE_THAN_0 = (%retail)

[rule::businesstransaction]
sourcetype = businesstransaction
MORE_THAN_0 = (%transaction)

I am not getting results with either method. Is there a better way to look into this?
I have a Splunk cluster consisting of a master, 2 search heads, and 2 indexers. The indexers receive logs from forwarders as well as through the AWS add-on. How do I achieve zero (near-zero) downtime during an upgrade of this cluster and ensure no data loss?
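The broad shape, per the standard rolling procedure (verify the exact ordering against the upgrade documentation for your target version): upgrade the master first, then the search heads, then the indexers one at a time under maintenance mode so bucket-fixing storms are avoided while a peer is down. A sketch of the indexer phase:

```
# On the cluster master, before touching any indexer:
splunk enable maintenance-mode

# On each indexer in turn:
splunk offline            # graceful shutdown; forwarders fail over to the other indexer
# ...install the new Splunk version, then start splunkd again...

# On the cluster master, once all indexers are upgraded:
splunk disable maintenance-mode
```

Assuming a replication factor of 2 across the 2 indexers and forwarders configured with both indexers in outputs.conf, searching and ingestion continue while each peer is down, and data loss is avoided because forwarders buffer and switch targets.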