All Topics



So what happened to parsetest?

$ splunk cmd parsetest
couldn't run "/opt/splunk/bin/parsetest": No such file or directory
$ splunk --version
Splunk 9.0.1 (build 82c987350fde)
$ splunk help cmd | grep parsetest
parsetest Validates parsing rules for a single event.

It is still documented in the latest docs: https://docs.splunk.com/Documentation/Splunk/9.0.3/Troubleshooting/CommandlinetoolsforusewithSupport#parsetest Looking through the manifests of older Splunk versions, parsetest has not shipped since splunk-7.1.9-45b25e1f9be3. I would like it back.
Hello! I have many events, and I have a search that returns only the events that contain the to field:

index="my_index_qa" sourcetype="example-qa" to=*

The results are a list of events with the following pattern:

db271cf8678c -2023-01-12 15:08:32.157 [app=app-name, traceId=traceid-value, spanId=spanid-value, INFO 1 [llEventLoop-5-5] filter.FilterBeingUsed : c=class, m=method, method=GET, to=http://example.url.com/path/extra, route=https://example.url.com/redirect/route, headers={X-Forwarded-For=[IPADDRESS, IPADDRESS2], X-Forwarded-Proto=[http], X-Forwarded-Port=[80], Host=[EXAMPLE-HOST], app-device=[DEVICE-INFO], app-user=[devicce-info-os-info], app-os=[APP-OS-VERSION], user-agent=[user-agent-example], app-version=[app.version.example], Origin=[origin-app]}

I want to group by the to= values so I can count how many times they repeat, create charts, and compute some other metrics. Is this possible? How can I do it? Thank you in advance for any help, and sorry if I wrote anything wrong; English is not my first language.
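A minimal sketch of one way to approach this, assuming the to field is already extracted at search time (it appears to be, since to=* works as a filter; the index and sourcetype are the ones from the question):

index="my_index_qa" sourcetype="example-qa" to=*
| stats count BY to
| sort - count

If to were not auto-extracted, something like | rex "to=(?<to>[^,]+)" could pull it out of the raw event first, and timechart count BY to would give a chartable series instead of a plain count.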
Hello good people, I am pretty new to the Splunk community and have inherited a Splunk Enterprise deployment. On weekdays (Mon-Fri) around 45GB of data is forwarded to the Splunk server per day. On weekends this increases to 80GB per day, which seems odd since no one is on the network on weekends. Where would I begin looking in the configuration files to decrease the amount of data being sent? Also, our license limit is only 15GB.
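Before touching configuration files, it may help to see which sourcetypes and hosts are driving the weekend volume. A hedged sketch against the internal license usage log (the log and its b/st/h/idx fields are standard on Splunk Enterprise, but the breakdown you need may differ):

index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(eval(b/1024/1024/1024)) AS GB BY st

Replacing BY st with BY h or BY idx breaks the daily volume down by host or index instead, which should point at whatever spikes on Saturdays and Sundays.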
Hello All, I have the following lines in the log file:

Server8 runiyal 2023-01-12 09:48:41,880 INFO Plugin.DOCUMENT Bytes size from input stream : 2072823
server8 runiyal 2023-01-12 09:48:41,978 INFO Plugin.DOCUMENT File size after upload to temp folder: 2072823
server8 runiyal 2023-01-12 09:48:43,391 SUCCESS Plugin.DOCUMENT File size after notifying the docrepo : 2072823

I want to:
1. Find the DocID at the end (2072823) in a line containing SUCCESS (line 3).
2. Then find the earlier line containing "from input stream" for the same DocID (line 1).
3. Subtract the timestamp of the "from input stream" line (line 1) from the timestamp of the SUCCESS line (line 3); the result should be in seconds.
4. Present the result in two columns: "DocID" and "Time Taken".

Will appreciate your inputs on how this can be achieved. Thanks!
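A minimal sketch of one way to do this with rex and stats, assuming the DocID is the trailing number on each line and the events live in a single index (the index name below is a placeholder):

index=my_index ("from input stream" OR "after notifying the docrepo")
| rex "(?<level>INFO|SUCCESS)\s+Plugin\.DOCUMENT"
| rex ":\s*(?<DocID>\d+)\s*$"
| stats earliest(_time) AS start latest(eval(if(level="SUCCESS",_time,null()))) AS end BY DocID
| where isnotnull(end)
| eval "Time Taken"=round(end-start,3)
| table DocID "Time Taken"

The where clause drops DocIDs that never reached a SUCCESS line, and the result is in seconds (with millisecond precision from the timestamps).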
So I am trying to do something super simple, having watched a basic YouTube video on how to do it: I just want to add a new menu item, something like "Test", to the top navigation bar. In the Web UI I go to Settings \ User interface \ Navigation menus and then click on the default nav name in the list for my custom app, for example:

<nav search_view="search">
  <view name="ess_home" default="true" />
  <view name="ess_security_posture" />
  <view name="incident_review" />
  <view name="ess_investigation_list" />
  <view name="my_new_test_item" />
</nav>

I save it and get the 'Successfully updated "default" in SplunkEnterpriseSecuritySuite.' confirmation, but nothing happens; I never see my new item up top.
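One thing worth checking: a bare <view name="..."/> entry is silently dropped unless a view with that exact name exists and is visible in the app, which would explain the successful save with no visible change. A hedged alternative that does not depend on an existing view is a labeled collection (standard nav XML; the label and URL below are placeholders):

<nav search_view="search">
  <view name="ess_home" default="true" />
  <collection label="Test">
    <view name="search" />
    <a href="https://example.com">External link</a>
  </collection>
</nav>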
We are trying to troubleshoot some memory consumption issues with one of the SH cluster nodes. We found that this instance shows a high concurrency of scheduled reports, 46/15 historical, while the other nodes are way below this number. Also, in the running historical scheduled reports panel there is a Mode column that shows "historical" as the value. What does a "historical" report mean in this context? The Splunk documentation for the DMC doesn't explain it. https://docs.splunk.com/Documentation/Splunk/9.0.3/DMC/Scheduleractivity Regards.
Hi all, I'm currently using Splunk Cloud and my goal is to display status icons as values, based on the search results, in my classic dashboard table. I found a way to display only the icon with HTML, but I'm struggling to assign the icon based on the result of the search query (if the result is active, pass the check-circle icon; otherwise pass the warning/error icon, etc.). How can I achieve this? Any tweaks to the code attached will be much appreciated!

<row>
  <html>
    <div>
      <td class="icon-inline numeric">
        <i>Range icon: </i>
        <i class="icon-check-circle" style="color: green"><var>low</var></i>
        <i class="icon-alert" style="color: orange">warning</i>
        <i class="icon-alert-circle" style="color: red">error</i>
      </td>
    </div>
  </html>
</row>
<row>
  <panel>
    <table id="t1">
      <search>
        <query>index=XXX host=* | eval host=upper(host) | stats count BY host | eval count=1, host=upper(host) | fields host count | stats sum(count) AS total BY host | rangemap field=total low=1-10 default=severe</query>
        <earliest>-30m@m</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">100</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
  </panel>
</row>
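One common pattern (it ships as the "Table Icon Set (Rangemap)" example in Splunk's Dashboard Examples app, which is the assumption here) is to compute a range value in SPL and let a small JS cell renderer swap it for an icon. The SPL side would look something like:

index=XXX host=*
| stats count AS total BY host
| rangemap field=total low=1-10 default=severe

rangemap adds a range field (low or severe here); the example app's table_icons_rangemap.js-style renderer then maps those values to icon-check-circle, icon-alert, and so on. On Splunk Cloud the JS has to be packaged inside an app, since static files can't be edited directly.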
I have a search that outputs a table like below:

 user  | host  | app
---------------------
user1 | host1 | app1

I want to add a new field that finds the Department of the user from another search, so it would look like this:

 dep  | user  | host  | app
-----------------------------
dep1 | user1 | host1 | app1

The second search will have something like this in it, so I don't think a join would be sufficient:

where match(search,"something\s+user")
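A minimal sketch of one join-free way to bolt the department on, assuming the second search can be reduced to user/dep pairs (the placeholder names below come from the question):

<first search>
| append
    [ <second search>
      | where match(search,"something\s+user")
      | table user dep ]
| stats values(dep) AS dep values(host) AS host values(app) AS app BY user
| table dep user host app

If the user-to-department mapping is reasonably static, writing it out once with outputlookup and enriching with | lookup on user is usually simpler and faster than an append per search.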
Hi, how can I remove null field values from the Palo Alto Cortex Data Lake stream logs, to reduce Splunk ingestion volume? Thanks
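One approach that gets used for trimming JSON payloads at ingest is a SEDCMD in props.conf on the parsing tier (heavy forwarder or indexer). A rough sketch only; the sourcetype below is a placeholder for the actual Cortex Data Lake sourcetype, and this kind of regex surgery can mangle nested JSON, so test it against real events first:

[my:cortex:sourcetype]
# strip "field": null and "field": "" pairs
SEDCMD-drop_null_fields = s/"[^"]+":\s*(null|""),?//g
# clean up any trailing comma the first rule leaves before a closing brace
SEDCMD-fix_trailing_commas = s/,\s*}/}/g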
I have an SPL search that returns a field with multiple values (names of lookups). I want to concat the lookup name and its origin app, found with the search below:

| rest splunk_server=local /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner search
| where match(search,"outputlookup\s+lookupname")
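A minimal sketch of the concatenation step, appended to the search above (the separator and the combined field name are arbitrary choices):

| rest splunk_server=local /servicesNS/-/-/saved/searches
| where match(search,"outputlookup\s+lookupname")
| eval combined = title . " (" . 'eai:acl.app' . ")"
| table combined

If the lookup names arrive as a multivalue field instead, mvzip(lookup_names, 'eai:acl.app', " - ") pairs the two fields up element by element.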
Hi! I want to know whether logs can end up ingested twice between Splunk Enterprise and Splunk Enterprise Security, whether logs ingested in Splunk Enterprise are available to searches made in Splunk Enterprise Security, and in general how this works at the indexer level.
I am having trouble expressing multiple averaging windows in table form. My table shows the same values for myval, five_min_val, and fifteen_min_val for each host. I can get some of what I want from timechart and a trellis layout on each of the aggregations for a single host, but I really want to look at the data across hundreds of hosts, for hosts where the value stays above some threshold over 15 minutes. I tried trendline with sma5 and sma15 to represent the 5-minute and 15-minute simple moving averages, with a similar effect. Please enlighten me?

<base search>
| fields _time host myval
| bin span=1m _time
| streamstats window=5 avg(myval) as five_min_val by host
| streamstats window=15 avg(myval) as fifteen_min_val by host
| stats latest(myval) as myval latest(five_min_val) as five_min_val latest(fifteen_min_val) as fifteen_min_val by host
| table host myval five_min_val fifteen_min_val
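A possible explanation, with a sketch: streamstats window counts events, not minutes, so unless every host emits exactly one event per minute the 5- and 15-event windows collapse onto nearly the same values. One hedged fix is to aggregate to one row per host per minute first, then run the moving windows over that regular series (the threshold of 100 is a placeholder):

<base search>
| bin span=1m _time
| stats avg(myval) AS myval BY _time host
| sort 0 _time
| streamstats window=5 avg(myval) AS five_min_val BY host
| streamstats window=15 avg(myval) AS fifteen_min_val BY host
| stats latest(myval) AS myval latest(five_min_val) AS five_min_val latest(fifteen_min_val) AS fifteen_min_val BY host
| where fifteen_min_val > 100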
Hello Splunkers,

I am currently facing a problem and can't find any documentation. Let me explain: we are using the Splunk_TA_o365 mostly for sign-in logs. The issue is that none of these logs have the right timestamp. For these sign-in logs, the timestamp is stored in the "createdDateTime" field, and not in the "timestamp" field like other events. So I tried to fix it in local/props.conf with the stanza:

[o365:graph:api]
TIME_PREFIX = ("createdDateTime":\s*")|timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
KV_MODE = json
TZ = UTC

That didn't work at all, but when I tried to write the same stanza in default/props.conf (and I know that is REALLY not recommended as a best practice), it surprisingly worked. So I was wondering whether this is normal behavior (which I'd find strange), or whether there is another solution that would be more sustainable than modifying the default folder. Thanks in advance for your time, best regards and happy Splunking!
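Since local/ should always win over default/, it may be worth confirming which copy of the stanza actually gets resolved, and on which instance; for a modular input like this one, parsing normally happens on the heavy forwarder or IDM running the TA, not on the search head. btool prints the merged configuration with the source file of every line:

$ splunk btool props list o365:graph:api --debug

If the local/ version doesn't appear there, the usual suspects are a typo in the stanza name or the props.conf having been edited on an instance that isn't doing the parsing.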
I would like to fit an ARIMA model to my data with a search something like this:

<base search> | timechart span=5m avg(value) as value by some_field

The problem is that the number of fields returned by this search is dynamic: it can return 5 fields one day but 3 or 7 the next, for instance. I would like to fit an ARIMA model to every field returned by that search. What I found was the foreach command, where you iterate over fields:

| foreach * [eval '<<FIELD>>' = ... ]

However, when I try to use the fit command instead of eval, I get an error message saying:

Error in 'foreach' command: Search pipeline may not contain non-streaming commands

since foreach cannot contain non-streaming commands. Is there a way to work around this issue?
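One hedged workaround, given that MLTK's fit has no by clause and foreach rejects non-streaming commands: run one fit per distinct value of some_field with map. A rough sketch only; maxsearches, the ARIMA order, and <base search> are all placeholders:

<base search>
| stats count BY some_field
| fields some_field
| map maxsearches=50 search="search <base search> some_field=$some_field$
    | timechart span=5m avg(value) AS value
    | fit ARIMA value order=4-0-1"

map launches a separate subsearch per row, so this scales poorly as the number of distinct some_field values grows, but it does handle the dynamic field count.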
Hi, I want to know how to differentiate between logs coming from a production license versus a non-production license. Thank you, Matilda
Hi all, recently I have been working on a query that can help me identify the execution of malicious documents which make use of "T1036.002: Masquerading (Right-to-Left Override)".

"Adversaries may manipulate features of an artifact to mask its true intentions/make it seem legitimate. One technique that could be employed to achieve this is right-to-left character override (RTLO). RTLO is a non-printing Unicode character that causes the text that follows to be displayed in reverse. Detection of this technique involves monitoring filenames for commonly used RTLO character formats such as \u202E, [U+202E], and %E2%80%AE."

My current query does not work and simply shows all file names from the Image field:

index=*
| eval file_name=replace(Image,"(.*\\\)","")
| rex field=file_name "(?i)(?<hex_field>202e)"
| search NOT (hex_field="")
| dedup file_name
| table file_name, hex_field, Image

Image field: C:\Users\Administrator.BARTERTOWNGROUP\Desktop\‮cod.3aka3.scr Note here that the rcs.3aka3.doc is RTL, not LTR. Does anyone have any idea how to achieve such filtering?
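A likely reason the query shows everything: the raw event contains the actual Unicode character U+202E, not the literal text "202e", so the pattern has to match the character itself. A hedged sketch using PCRE's \x{...} escape (same index and fields as the question; if the engine rejects \x{202E}, searching for the URL-encoded %E2%80%AE form in the raw data is a fallback):

index=*
| eval file_name=replace(Image,"(.*\\\)","")
| regex file_name="\x{202E}"
| dedup file_name
| table file_name, Image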
Hello everyone, I am trying to configure the Splunk DB Connect app, and getting the following error in the logs:

2023-01-12T14:46:28+0300 [ERROR] [settings.py], line 89 : Throwing an exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 76, in handle_POST
    self.validate_java_home(payload["javaHome"])
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 220, in validate_java_home
    raise Exception(reason)
Exception: PermissionError: [Errno 13] Permission denied: '/usr/java/jdk1.8.0_311-amd64/bin/java' validate java command: /usr/java/jdk1.8.0_311-amd64/bin/java

The folder /usr/java/jdk1.8.0_311-amd64 is owned by splunk:
drwxr-xr-x. 9 splunk splunk 107 Jan 12 12:54 jdk1.8.0_311-amd64
The folder /opt/splunk/etc/apps/splunk_app_db_connect:
drwx------. 18 root root 4096 Jan 12 13:44 splunk_app_db_connect

Splunk is running as the user splunk:
grep SPLUNK_OS_USER= /opt/splunkforwarder/etc/splunk-launch.conf
SPLUNK_OS_USER=splunk

What is the reason? Can anyone help?
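A couple of quick checks that may narrow this down, run as root (the JDK path is taken from the error above). Also note that the ls output quoted above shows splunk_app_db_connect owned by root:root with mode drwx------, which would lock the splunk user out of the app directory as well:

# can the splunk user actually execute the java binary?
sudo -u splunk /usr/java/jdk1.8.0_311-amd64/bin/java -version

# check the execute bit on every directory along the path
namei -l /usr/java/jdk1.8.0_311-amd64/bin/java

# if the app directory really is root-owned, hand it back to splunk
chown -R splunk:splunk /opt/splunk/etc/apps/splunk_app_db_connect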
Hi all, I found an issue with the iplocation database on Splunk Cloud: if I use iplocation for some IP addresses (e.g. 147.161.244.186), I get Sao Paulo (Brazil), but using whois I get Amsterdam (NL), which is the correct answer. It could be a bug or an update problem; has anyone experienced this issue? And does anyone know how to update the iplocation database on Splunk Cloud? It doesn't seem to be possible. How can I intervene? Thank you for your support. Ciao. Giuseppe
The Upgrade Readiness App added to the Splunk Cloud Platform shows the following two errors:
1. Search peer SSL config check
2. MongoDB TLS and DNS validation check
The following knowledge article says that "server.conf" needs to be configured, but I think that is not possible with Splunk Cloud. https://community.splunk.com/t5/Security/Search-peer-SSL-config-check-How-to-resolve-these-errors-that/m-p/618002#M16425

Can these errors be ignored? The Splunk Cloud Platform version is "Version:9.0.2209.1". Regards,
Is there a way to define a standard set of health rules similar to the ones that are auto-created when you create a new application? Scenario: when a new application reports to AppDynamics, a standard set of health rules with custom names is applied to it. I understand we could create these using the API, but that would only happen after a new application has been identified, i.e. after it is created. The default rules seem to be picked up somewhere from the default controller configuration.
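For the API route, a hedged sketch using the controller's Health Rules API (available on recent controller versions; the host, port, credentials, and application IDs below are placeholders): export the rules from a "template" application once, then replay them onto each new application.

# export health rules from a template application (ID 1001)
curl --user user@customer1:password \
  "https://controller-host:8090/controller/alerting/rest/v1/applications/1001/health-rules"

# create a rule on a new application (ID 1002) from a saved JSON definition
curl --user user@customer1:password -X POST \
  -H "Content-Type: application/json" -d @health-rule.json \
  "https://controller-host:8090/controller/alerting/rest/v1/applications/1002/health-rules"

As noted in the question, this still requires detecting the new application first (for example by polling /controller/rest/applications), so it doesn't fully replace a controller-level default.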