All Posts


Hi @onurragacc, Using source as a literal example:

index=foo source IN (source1 source2) | table rule1 Explanation | outputlookup rule_lookup

The rule_lookup lookup will contain only rows from the search results, both updated and not updated. No additional logic is required. Can you provide an example in SPL, with corresponding events and lookup data?
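As a self-contained illustration (with hypothetical field values, not data from the thread), you can see what ends up in the lookup with:

```
| makeresults count=2
| streamstats count AS n
| eval rule1="rule".n, Explanation="example ".n
| table rule1 Explanation
| outputlookup rule_lookup
```

By default, outputlookup replaces the lookup's contents with exactly the rows piped into it; if you want to keep pre-existing rows, use `| outputlookup append=true rule_lookup` instead.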
@AL3Z wrote:

# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local.
# To make changes, copy the section/stanza you want to change from $SPLUNK_HOME/etc/apps/Splunk_TA_windows/default
# into ../local and edit there.

Stop right there! These comments are very important, and yet you've chosen to ignore them by editing a file that should not be modified. What other instructions have you disregarded? The configs shown look good to me, but I am not familiar enough with Windows to know whether there's something there that shouldn't be, or vice versa.
0. Please post your searches and such in a preformatted paragraph or a code block. It makes them easier to read.

1. There are no miracles. If a repeated search yields different results (and it doesn't contain any random element), something must be varying across the separate runs: either you're running it over different time windows (for example, "earliest=-1m latest=now" will contain different events depending on when it's run), or your events for a given time window change (maybe events are backfilling the index due to latency problems). Sometimes you might have connectivity problems and not be getting all results from all indexers (but that should throw a warning), or have performance problems and have your searches finalized before they fully complete.

2. You are forcefully overwriting the _time field (honestly, I have no idea why - you could just as well use another field name; if you want it for automatic formatting, you could rename it at the very end of your search).

3. As @yuanliu already pointed out, there seems to be a problem with the quality of your data. Data onboarding includes finding the proper "main" timestamp within the event (some events contain multiple timestamps; in that case you need to decide which is the primary one for the event) and making sure it's parsed out properly so that the event is indexed at the proper point in time. That's one of the most important parts - if not the most important part - of onboarding events: you must know where to look for your data. Otherwise you have no way of knowing what data you have, how much of it you have, where it is, and how to search for it.

4. Yes, latest(X) looks for the latest value of field X. It doesn't mind any other fields. So latest(X) and latest(Y) will show you the latest seen values of fields X and Y respectively, but they don't have to come from the same event. If one event had only field X and another had only field Y, you'd still get both of them in your results, since each was the last occurrence of its respective field.
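Point 4 can be demonstrated with a small synthetic search (a sketch using makeresults; field names X and Y are made up for the demo):

```
| makeresults count=2
| streamstats count AS n
| eval _time=_time+n
| eval X=if(n=1, "only-in-older-event", null())
| eval Y=if(n=2, "only-in-newer-event", null())
| stats latest(X) AS X, latest(Y) AS Y
```

The result row contains both "only-in-older-event" and "only-in-newer-event", even though no single event ever carried both fields - each latest() is evaluated independently per field.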
TA_for_indexers contains only the part of the installation needed on indexers (the definition of indexes) for ES to work. But that is just so that ES on its own is "fully installed". Apart from that, Splunk (and ES too) needs to know how to work with the specific types of data provided by various kinds of sources. That's what the TAs for those sources are for. So yes, if you have 40 _types_ of devices, you might need 40 different TAs. Often a TA contains definitions, parsing rules, and CIM mappings for multiple sources from a single vendor (so you might not need a separate TA for every single type of Juniper firewall, just a single TA able to parse JunOS events).
So if I have 50 devices, do I need to install the TA on all 50? Let's assume Cisco, Fortinet, Palo Alto... So it's not enough to install the TA on the indexers when such devices are already sending their logs to the indexer?
Hello, @richgalloway @PickleRick, The regex I used seems effective, but it's unexpectedly blocking all my Windows security events. I've checked the regex, and I haven't specifically blacklisted any Windows executables. Could you assist me in analyzing the below list of blacklisted executables?

# Copyright (C) 2019 Splunk Inc. All Rights Reserved.
# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local.
# To make changes, copy the section/stanza you want to change from $SPLUNK_HOME/etc/apps/Splunk_TA_windows/default
# into ../local and edit there.
#
###### OS Logs ######
[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*(groupPolicyContainer|computer|user))"
blacklist2 = EventCode="5447|4634|5156|4663|4656|5152|5157|4658|4673|4661|4690|4932|4933|5158|4957|5136|4674|4660|4670|5058|5061|4985|4965"
blacklist3 = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunkd.exe)|.+(?:SplunkUniversalForwarder\\bin\\btool.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-powershell.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-winprintmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-regmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-netmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-admon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-MonitorNoHandle.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-winevtlog.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-perfmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunkd.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-wmi.exe)|.+(?:Windows Defender Advanced Threat Protection\\SenseCncProxy.exe)|.+(?:Windows Defender Advanced Threat Protection\\SenseCM.exe)|.+(?:Windows Defender Advanced Threat Protection\\MsSense.exe)|.+(?:Microsoft\\Windows Defender\\Platform\\.*\MsMpEng.exe)|.+(?:Microsoft\\Windows Defender\\Platform\\.*\\MpCmdRun.exe)|.+(?:Microsoft\\Windows Defender Advanced Threat Protection\\Platform\\.*\\MsSense.exe)|.+(?:Microsoft\\Windows Defender\\Platform\\.*\\MsMpEng.exe)|.+(?:Microsoft\\Windows Defender Advanced Threat Protection\Platform\.*\\SenseIR.exe)|.+(?:Microsoft\\Windows Defender Advanced Threat Protection\\DataCollection\\.*\\OpenHandleCollector.exe)|.+(?:ForeScout SecureConnector\\SecureConnector.exe)|.+(?:Windows Defender Advanced Threat Protection\\SenseIR.exe)|.+(?:Rapid7\\Insight Agent\\components\\insight_agent\\.*\\get_proxy.exe)|.+(?:Rapid7\\Insight Agent\\components\\insight_agent\\.*\\ir_agent.exe|.+(?:Tanium\\Tanium Client\\TaniumCX.exe)|.+(?:AzureConnectedMachineAgent\\GCArcService\\GC\\gc_worker.exe)|.+(?:AzureConnectedMachineAgent\\GCArcService\\GC\\gc_service.exe)|.+(?:WindowsPowerShell\\Modules\\gytpol\\Client\\fw.*\\GytpolClientFW.*.exe)|.+(?:AzureConnectedMachineAgent\\azcmagent.exe)|.+(?:Microsoft Monitoring Agent\\Agent\\MonitoringHost.exe)"
blacklist4 = EventCode="4688" Message="(?:New Process Name:).+(?:Tanium\\Tanium Client)"
blacklist5 = EventCode="4688" Message="(?:Creator Process Name:).+(?:Tanium\\Tanium Client)"
renderXml=true
index = es_winsec

Thanks...
So the site we use to get the 6-month developer license has been down for a few days now. After accepting the T&Cs (https://dev.splunk.com/enterprise/dev_license/devlicenseagreement/) and waiting 30+ seconds, I end up on https://dev.splunk.com/enterprise/dev_license/error, which shows this error: "$Request failed with the following error code: 500". I've already sent this to devinfo@splunk.com but have gotten no response. Is anyone else hitting this issue?
2. It appears the rows from which my_count is taken are always those without a _time value resulting from the eval in my query (because either `my_timestamp` did not match the strptime format, or that field was not present when the record was ingested into splunk -- my data has both cases)

This is to say that you have bad data. Bad data leads to bad results. You need to find a way to fix your data, or at least fix how you extract from my_timestamp if you cannot fix that field in the data.
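One possible mitigation (a sketch, assuming my_timestamp is an ISO 8601 string when present) is to keep the original index-assigned _time whenever the timestamp fails to parse, instead of overwriting _time with null:

```
index="my-test-index" project="my-project"
| eval parsed_time=strptime(my_timestamp, "%Y-%m-%dT%H:%M:%S.%N+00:00")
| eval _time=coalesce(parsed_time, _time)
| stats latest(my_count) AS my_count BY project
```

Events whose my_timestamp is missing or malformed then fall back to their ingest time rather than sorting unpredictably.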
Sorry, I assumed the dataset "botsV1" was very widely known. Thank you for responding to my question; however, I have since been able to solve the issue.
Hi @Raymond2T, In Simple XML dashboards, you can use the deprecated but still functional classField option. For example, if you have a field named range with values green and red, use:

<single>
  <search>
    <!-- ... -->
  </search>
  <option name="classField">range</option>
  <!-- ... -->
</single>

When trellis is enabled, the resulting div element will have an additional class of either green or red, depending on the value of that row's range field:

<div id="singlevalue" class="single-value shared-singlevalue red" ...>

You can adjust your stylesheet to include the green and red classes as desired:

<style>
@keyframes blink {
  100%, 0% { opacity: 0.6; }
  60% { opacity: 0.9; }
}
.single-value.red rect {
  animation: blink 0.8s infinite;
}
</style>
@gillettepd  I've not used or written code (in C) for Domino since it was still a Lotus product. During IBM's tenure, JDBC Access for IBM Lotus Domino <https://www.openntf.org/main.nsf/project.xsp?r=project/JDBC%20Access%20for%20IBM%20Lotus%20Domino> may have been a viable option for querying LOG.NSF, DOMLOG.NSF, etc. using Splunk DB Connect. The JDBC solution may work with HCL Domino 11.x, but a quick search suggests it will not work with 12.x. The JDBC driver may also be incompatible with DB Connect, depending on its implementation of expected JDBC interfaces. That said, give it a try! I would evaluate OData access <https://opensource.hcltechsw.com/Domino-rest-api/tutorial/odata/index.html>; however, there is no OData add-on for Splunk. If you're comfortable with Python, REST API Modular Input <https://splunkbase.splunk.com/app/1546> is a (mostly) fee-based add-on that may simplify writing an OData wrapper. Splunk Add-on Builder <https://splunkbase.splunk.com/app/2962> is always an option, but it exposes the Splunk API in a way that may complicate your solution.
With a query like the following (I've simplified it a little here and renamed some fields):

index="my-test-index" project="my-project"
| eval _time = strptime(my_timestamp, "%Y-%m-%dT%H:%M:%S.%N+00:00")
| stats latest(my_timestamp) latest(_time) latest(my_count) as my_count by project

I see behaviour that surprised me:

1. If I repeatedly issue the query, the value of my_count varies.
2. It appears the rows from which my_count is taken are always those without a _time value resulting from the eval in my query (because either `my_timestamp` did not match the strptime format, or that field was not present when the record was ingested into splunk -- my data has both cases).
3. In the output of the search, the value of my_timestamp returned does not always come from the same ingested record as my_count.
4. In fact, the value of my_timestamp in the search output is always taken from the same single record: it doesn't change when I repeatedly issue the query.

I guess 1 and 2 occur because "null" (or empty, or some similar concept) _time values aren't really expected and happen to sort latest. I guess 3 is because the `latest` function operates field by field and does not select a whole row -- combined again with the fact that some _time values are null. I don't understand 4, but perhaps it's a coincidence and not reliably true in general outside of my data set; I'm not sure.

What I really want is to find the ingested record with the latest value of `my_timestamp` for a given `project`, so I can present fields like `my_count` by `project` in a "most recent counts" table. I don't want to operate on individual fields' "latest" values as in the query above, but rather on the latest entire records. How can I best achieve that in splunk?
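For what it's worth, one common pattern for selecting whole latest records (a sketch, assuming my_timestamp is an ISO 8601 string, which sorts correctly even as plain text) is to sort descending and deduplicate per project:

```
index="my-test-index" project="my-project"
| sort 0 - my_timestamp
| dedup project
| table project my_timestamp my_count
```

Because dedup keeps the first event it sees for each project value, sorting newest-first leaves exactly one complete record per project, with all of its fields taken from the same event.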
More words please. What do you want to do? What do you mean by "cisco servers" and "cisco console"? And what does it have to do with Splunk?
If I recall correctly, it worked when I sent test logs from my client app, which is instrumented with the Faro Web SDK library. I didn't go back to compare the log contents against the OTLP spec log to figure out the difference, since it was working. I think it was silently failing for some reason.
In one example, the brand field is terminated by a space rather than an ampersand, so add \s to the regex:

index = dd
| rex field=_raw "brand=(?<brand>[^&\s]+)"
| rex field=_raw "market=(?<market>[^&]+)"
| rex field=_raw "cid=(?<cid>\d+)"
| table brand, market, cid
Hi, I have my messages like below:

msg: abc.com - [2023-11-24T18:38:26.541235976Z] "GET /products/?brand=ggg&market=ca&cid=5664&locale=en_CA&pageSize=300&ignoreInventory=false&includeMarketingFlagsDetails=true&size=3%7C131%7C1%7C1914&trackingid=541820668241808 HTTP/1.1" 200 0 47936 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36" "10.119.25.242:59364" "10.119.80.158:61038" x_forwarded_for:"108.172.104.40, 184.30.149.136, 10.119.155.154, 10.119.145.54,108.172.104.40,10.119.112.127, 10.119.25.242" x_forwarded_proto:"https" vcap_request_id:"faa6d72c-4518-4847-47b2-0b340bb27173" response_time:0.455132 gorouter_time:0.000153 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"41" instance_id:"5698b714-359f-4906-742e-2bd7" x_cf_routererror:"-" x_b3_traceid:"042db9308779903a607119a204239679" x_b3_spanid:"b6e3d71259e4c787" x_b3_parentspanid:"607119a204239679" b3:"1188a5551d8c70081e69521568459a30-1e69521568459a30"

msg: abc.com - [2023-11-24T18:38:25.779609363Z] "GET /products/?brand=hhh&market=us&cid=1185233&locale=en_US&pageSize=300&ignoreInventory=false&includeMarketingFlagsDetails=true&department=136&trackingid=64354799847524800 HTTP/1.1" 200 0 349377 "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1" "10.119.25.155:53702" "10.119.80.152:61026" x_forwarded_for:"174.203.39.239, 23.64.120.177, 10.119.155.137, 10.119.145.11,174.203.39.239,10.119.112.37, 10.119.25.155" x_forwarded_proto:"https" vcap_request_id:"d1628805-0307-4bf7-7d8d-b1fa3a829986" response_time:1.211096 gorouter_time:0.000257 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"180" instance_id:"8faf9328-b05d-4618-7d12-96e6" x_cf_routererror:"-" x_b3_traceid:"06880ee3e5ad85b36dd3f4e64337a842" x_b3_spanid:"acb1620e517eebec" x_b3_parentspanid:"6dd3f4e64337a842" b3:"06880ee3e5ad85b36dd3f4e64337a842-6dd3f4e64337a842"

msg: abc.com - [2023-11-24T18:38:26.916331792Z] "GET /products/?cid=1127944&department=75&market=us&locale=en_US&pageNumber=1&pageSize=60&trackingid=6936C9BF-D9DD-4D77-A14F-099C0400345D&brand=lll HTTP/1.1" 200 0 48615 "-" "browse" "10.119.25.172:51116" "10.119.80.139:61034" x_forwarded_for:"10.119.80.195, 10.119.25.172" x_forwarded_proto:"https" vcap_request_id:"a3125da7-a602-4e17-6656-909f380c12ed" response_time:0.068075 gorouter_time:0.000737 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"156" instance_id:"4f44c63e-44c6-4605-7466-fe5d" x_cf_routererror:"-" x_b3_traceid:"731b434ec32bb0eb6236fd4a8b8e1195" x_b3_spanid:"6236fd4a8b8e1195" x_b3_parentspanid:"-" b3:"731b434ec32bb0eb6236fd4a8b8e1195-6236fd4a8b8e1195"

I am trying to extract the values brand, market, and cid from the above URLs with the below query:

index = dd
| rex field=_raw "brand=(?<brand>[^&]+)"
| rex field=_raw "market=(?<market>[^&]+)"
| rex field=_raw "cid=(?<cid>\d+)"
| table brand, market, cid

but I get the whole URL after brand= extracted, not just the brand, market, and cid values. Please help.
I read about this a little more and, to be honest, I couldn't find a clear answer or any reason why it works that way. The "App/user configuration files" documentation says that app.conf is for the user/app level only; you cannot use it for global configurations. BUT the instructions still say you should put it into etc/system/local/app.conf to use it globally ("Set the deployer push mode"). This is quite confusing! And in fact that file lives on the deployer under etc/shcluster/apps/<app>, not under etc/apps/<app>, which basically means it is not merged with (or affected by) other app files when the bundle is applied (read: created) on the deployer. Precedence applies only to files under etc/apps/<app> + etc/system, if I have understood correctly.

Usually, when you create your own app, you put all configurations in the default directory, not the local directory. This shouldn't have any side effect other than where the settings end up when the bundle is applied to the SHC members. Of course, e.g. pass4SymmKey values are encrypted (and the plain text removed) only in files that are in local! If you have apps e.g. from Splunkbase, you should put your local changes under the local directory, to avoid losing them when you update the app to a newer version. But behaviour shouldn't otherwise depend on whether app.conf is in default vs. local. If it does have side effects, they should be mentioned in the docs. I haven't seen any mention that default vs. local is used for setting global vs. local values; those directories are only used for precedence. This definitely needs some feedback to the doc team.

BTW: @pmerlin1, you said that you migrated from a SH to a SHC. Did you follow the instructions and use only clean, new SHs as members of the SHC, rather than reusing (without cleaning) the old SH?
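For reference, the push mode can also be set per app on the deployer, inside that app's own app.conf (a sketch based on my reading of the app.conf spec; the app name is a placeholder):

```
# On the deployer, in $SPLUNK_HOME/etc/shcluster/apps/<app>/local/app.conf
[shclustering]
deployer_push_mode = merge_to_default
```

That per-app setting avoids the confusing global etc/system/local/app.conf route the docs also describe.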
Hello, I tried with that option, but no luck. I'm not seeing any option called "Akamai Security Incident Event Manager API" under Settings > Data inputs.