All Topics

Hi, after upgrading to Splunk 9.0.2, the UFMA started using https://[::1]:8089/services/deployment/server/clients?count=0 for its REST API calls, which resulted in a "400 Host header contains invalid characters" error. I was able to 'fix' this by adding connectUsingIpVersion = 4-first to the [general] stanza in server.conf. What would be the correct way to handle this?
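For reference, a minimal sketch of the workaround described above as it would sit in server.conf; whether this is the recommended fix is exactly what the question asks:

# server.conf
# Prefer IPv4 for outbound management connections, so the client
# stops targeting the IPv6 loopback address [::1]
[general]
connectUsingIpVersion = 4-first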
raw event: {... "jvm_cmd":"bin/java -Dp -Dp1=v1-Dp2=v2 -Dq -Dp3=v3 ..."} How do I extract the key-value pairs from the jvm_cmd value and print them in a Splunk search? I am not an admin, so I can't change props.conf or transforms.conf. I tried https://community.splunk.com/t5/Splunk-Search/Using-KV-MODE-auto-in-props-conf-how-do-I-get-a-search-time/m-p/240834 and rex without any success. Any help would be much appreciated.
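A minimal search-time sketch, assuming each JVM option is a space-delimited token of the form -Dkey or -Dkey=value (the field name jvm_cmd comes from the event above; everything else is illustrative):

... | rex field=jvm_cmd max_match=0 "-D(?<jvm_opt>\S+)"
    | mvexpand jvm_opt
    | rex field=jvm_opt "^(?<key>[^=]+)(=(?<value>.*))?$"
    | table key value

Note that if the raw value really does run options together without a space (as in -Dp1=v1-Dp2=v2 above), no space-based regex will split them cleanly, and the pattern would need to anchor on the -D prefix instead.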
I have the following table of activities:

Internal   External   Direction
1.1.1.1    2.2.2.2    Outbound
3.3.3.3    4.4.4.4    Inbound
5.5.5.5    4.4.4.4    Inbound
1.1.1.1    8.8.8.8    Outbound

I want to group them by either Internal or External, based on what is in the Direction field: if it's Outbound I want to group by Internal, if it's Inbound I want to group by External, and get the count. I would like to get the following table as a result:

Internal          External          Count   Grouped by   Direction
1.1.1.1           2.2.2.2 8.8.8.8   2       1.1.1.1      Outbound
3.3.3.3 5.5.5.5   4.4.4.4           2       4.4.4.4      Inbound

Thanks.
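A hedged sketch of one way to do this, using the field names from the tables above:

... | eval grouped_by=if(Direction="Outbound", Internal, External)
    | stats values(Internal) as Internal, values(External) as External,
            count as Count, values(Direction) as Direction by grouped_by
    | rename grouped_by as "Grouped by"
    | table Internal External Count "Grouped by" Direction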
Let's say we have a couple of fields in our dataset (called my_dataset): event_time, event_type, user, field1 and field2. Now, we want to build a search that fires when: distinct count of field1 > X OR distinct count of field2 > Y happens within Z minutes from when a specific event_type (let's call that value type1) happens for the first time. In other words, this search counts the number of different field1 or field2 unique values within Z minutes of the first type1 (but it searches across all event_type values when counting field1 and field2). I tried:

| tstats ... from datamodel=my_dataset groupby _time
| eval detection_time_end=strftime((relative_time(event_time,"+`Z`")), "%F %T.%Q"), only_type1=if((event_type="type1"),1,null())
| stats earliest(event_time) as earliest_time, earliest(detection_time_end) as end_of_detection_time, dc(field1) as number_of_different_field1_events, dc(field2) as number_of_different_field2_events by user, only_type1

This only takes me so far and I'm not sure what to do next. I get statistics of the earliest time and end of detection time of type1 per user, with total distinct counts of field1 and field2 events. I guess I have to use a subsearch here? Any help is appreciated, since I got really stuck with this one. Thanks!
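A hedged, subsearch-free sketch, assuming Z is a number of minutes and that the tstats output carries _time, user, event_type, field1, and field2 (X, Y, Z are the placeholders from the question):

| tstats ... from datamodel=my_dataset groupby _time, user, event_type, field1, field2
| eventstats min(eval(if(event_type="type1", _time, null()))) as first_type1 by user
| where isnotnull(first_type1) AND _time>=first_type1 AND _time<=first_type1+(Z*60)
| stats dc(field1) as dc_field1, dc(field2) as dc_field2 by user
| where dc_field1>X OR dc_field2>Y

eventstats attaches each user's first type1 time to every event for that user, so the Z-minute window filter can run without a subsearch.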
Hi, I have an application that has 2 tiers, and all tiers are hosted on 2 nodes. We need to monitor the application's uptime and downtime. If, over the last 7 days, the application was down on one day, we need a percentage reflecting that the application was up 6 days and down 1 day. Is it possible to do this?
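A rough sketch of the availability calculation, assuming a health or heartbeat index exists and a day counts as down if any "down" event occurred that day (the index, field, and value names here are placeholders):

index=app_health earliest=-7d@d latest=@d
| bin _time span=1d
| stats count(eval(status="down")) as down_events by _time
| eval day_state=if(down_events>0, "down", "up")
| stats count(eval(day_state="up")) as up_days, count(eval(day_state="down")) as down_days
| eval uptime_pct=round(up_days/(up_days+down_days)*100, 1)

With 6 up days and 1 down day this yields uptime_pct=85.7.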
hi team,

1. I have a query that returns only the 2 columns below:

PQ, ACT
pq1, act1
PQ1, act2
pQ1, act3
pq2, act4
QP2, act5
Pq2, act6
pq3, act7
Pq3, act8
pq_3, act9
...

2. Then I have a standard pq list CSV file uploaded in Splunk. In the CSV file there is a column called 'pq' with the standard pq values defined; please check the sample below.

PQ
pq1
pq2
pq3
pq4
pq5
pq6
...

3. I want to compare the PQ values in the query result against the lookup CSV file in order to:
   a) return the PQ and ACT rows where the PQ value does not exactly match the one defined in the lookup file, including case-sensitivity issues;
   b) return the PQ and ACT rows where the PQ value is in the query result but not in the lookup table;
   c) return the PQ values that are in the lookup table but not in the query result.

How do I compose a query to meet the 3 requirements in step 3?

Best regards!
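A hedged sketch for requirements (a) and (b), assuming the lookup file is named standard_pq.csv and its column is pq; eval string comparison is case-sensitive, which is what separates "exact" from "case/format" mismatches here, and the normalisation via lower() plus underscore removal is an illustrative assumption:

... | eval norm=lower(replace(PQ, "_", ""))
    | join type=left norm [| inputlookup standard_pq.csv | eval norm=lower(replace(pq, "_", "")), std_pq=pq | fields norm std_pq]
    | eval status=case(PQ==std_pq, "exact match",
                       isnotnull(std_pq), "(a) in lookup but not an exact match",
                       true(), "(b) not in lookup at all")
    | where status!="exact match"
    | table PQ ACT status

Requirement (c) can be approached from the other direction, starting from | inputlookup standard_pq.csv and filtering out the pq values that the main search returned.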
Hi, what are the limitations of subsearches? Please give one or two examples. This is an interview question.

Regards,
Suman P.
Hi all, we have a few dashboards that use summary indexes to populate data. A few users reported that they are unable to see any values when they access the respective dashboards (the issue is reproducible as well). However, when I log in as an admin user, the dashboards work fine and the values are up to date. I have validated the roles assigned (authorize.conf) and it looks good, with access to the summary indexes:

[role_example_user]
srchIndexesAllowed = example_index;example_index2;summary_index1;summary_index2
srchMaxTime = 144000
importRoles = default_user

I also validated the default.meta configs, and the respective role has read access to the saved searches, views, etc.:

[savedsearches/summary_index1]
access = read : [ admin, role_example_user ], write : [ ]
export = none
owner = test_user

Still, users with the respective roles can't see anything on the dashboards. Please let me know how I can fix this issue.
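A hedged diagnostic, run while logged in as one of the affected users, to separate a permissions problem from a dashboard problem (the index name is taken from the stanza above):

index=summary_index1 earliest=-24h | stats count

If this returns 0 for the user but not for the admin, the usual suspects are a srchFilter restriction on the role (or on an inherited role) or the sharing of the knowledge objects the dashboard depends on, rather than srchIndexesAllowed.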
Hi all, these are the logger info counts which are generated in Splunk:

Total numner where inds-a 20
Total numner where inds-b 30
Total numner where inds-c 40
Total numner where inds-d 50

I need to create an alert based on the inds-c percentage: if inds-c is greater than 10%, it should fire an alert. Below is the search query I am trying, but it has some issue with the rex part; any suggestions?

index=abc log_severity=INFO OR WARN appname=doc country=ind earlies=@d
|rex "Total Number where inds-c (?<counts>\d+)"
|rex "Total Number where inds-* (?<Allcounts>\d+)"
eval percentage=((counts/Allcounts)*100)
where percentage>=10
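A hedged rework of that search; note that rex must match the exact text in the events, which in the samples above reads "Total numner ..." while the rex looks for "Total Number ...", that eval and where need leading pipes, and that earlies should be earliest. Index and field names are taken from the attempt above:

index=abc (log_severity=INFO OR log_severity=WARN) appname=doc country=ind earliest=@d
| rex "where inds-(?<ind>\w+)\s+(?<count>\d+)"
| stats sum(count) as total, sum(eval(if(ind="c", count, 0))) as c_count
| eval percentage=round(c_count/total*100, 2)
| where percentage>=10

With the sample counts above (20+30+40+50=140, inds-c=40), this gives 28.57% and the alert would fire.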
Hello everyone,

I have noticed that some users in our Splunk environment always use base searches and post-process searches, because they were told that doing so was good practice. But in some cases I have noticed that the base search is not speeding up the dashboard; instead it takes more time. For example, there is a dashboard that uses a base search like this:

Base search:
index=example sourcetype=testing | fields *

And then, when Splunk runs the post-process search against that base search, it does something weird, adding | fields * to the search. For example:

index=example sourcetype=testing | fields * | eval Date=strftime(_time, "%m/%d/%Y") | dedup s1, s1 | fields * | search something=tosearch | fields * | eval _time = strptime(Date,"%m/%d/%Y")

when the post-process search is:

<query> | eval Date=strftime(_time, "%m/%d/%Y") | dedup s1, s1 | search something=tosearch | eval _time = strptime(Date,"%m/%d/%Y") </query>

So I would like to understand why Splunk does this. I would also like to know whether there are scenarios where using a base search is not recommended.

Thanks in advance.
Best regards,
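A hedged illustration of the usual guidance: base searches work best when they are transforming searches (ending in stats, timechart, etc.) or at least restrict the field list, because a non-transforming base search passes raw events to every panel and post-processing is subject to event-count limits. The field names below are placeholders:

<search id="base">
  <query>index=example sourcetype=testing | fields _time, s1, something</query>
</search>

<search base="base">
  <query>| search something=tosearch | eval Date=strftime(_time, "%m/%d/%Y")</query>
</search>

With | fields * the base search drags every field of every event along, which is often why such dashboards get slower rather than faster.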
Hi wonderful people. I wanted to know if we can combine two REST endpoints in Splunk to get one output:

| rest /services/authentication/users splunk_server=local

and

| rest /services/admin/SAML-groups splunk_server=local

How can I combine the above two to get the results in one query?
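A minimal sketch that unions the two result sets with append; if the goal is instead to correlate rows on a shared field, join (or append followed by stats values(*) by that field) would be the alternative:

| rest /services/authentication/users splunk_server=local
| append [| rest /services/admin/SAML-groups splunk_server=local]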
Hi fellow Splunkers, good day. We are noticing that the applications in our Splunk Cloud Platform are not sorted by app label, unlike in our Splunk Enterprise platform. Would anyone be able to suggest whether there is a way to sort the apps list dropdown in Splunk Cloud?

Thanks a lot in advance. Kind regards!
I want to add an annotation to a dashboard every time we switch from blue servers to green servers, or green to blue. There is no event for this, but I can calculate the active color by comparing the count of each type of server. If I look two minutes ago and compare it to one minute ago, I can see whether the active color changed. So if two minutes ago there were more blue servers than green servers, but now there are more green than blue, I know the active color changed.

This query will show a transition if I give it two time frames (two minutes ago compared to one minute ago). It works, but I want the query to show me all color transitions over a specific time period, such as 24 hours.

index=... earliest=-3m latest=-2m
| stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount
| eval activePreviously=if(BlueCount > GreenCount, "BLUE", "GREEN")
| fields activePreviously
| join [search index=... earliest=-2m latest=-1m
    | stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount
    | eval activeNow=if(BlueCount > GreenCount, "BLUE", "GREEN")
    | fields activeNow]
| eval transition=if(activePreviously=activeNow, "no", "yes")
| where transition="yes"
| table transition activeNow activePreviously

This search will show me the active color in 2-minute periods over a given time frame:

index=...
| bin _time span=2m
| stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount by _time
| eval active=if(BlueCount > GreenCount, "BLUE", "GREEN")

This is what I see:

_time                 BlueCount   GreenCount   active
2022-11-15 11:15:00   1561        143          BLUE
2022-11-15 11:16:00   1506        140          BLUE
2022-11-15 11:17:00   1627        154          BLUE
2022-11-15 11:18:00   1542        148          BLUE
2022-11-15 11:19:00   1199        553          BLUE
2022-11-15 11:20:00   255         1584         GREEN
2022-11-15 11:21:00   3           1721         GREEN
2022-11-15 11:22:00   0           1733         GREEN
2022-11-15 11:23:00   0           1780         GREEN
2022-11-15 11:24:00   0           1802         GREEN

I want to add a field that indicates whether the color changed from the previous _time. I will then only show (annotate) the time and color where change=yes:

_time                 BlueCount   GreenCount   active   change
2022-11-15 11:15:00   1561        143          BLUE     N/A
2022-11-15 11:16:00   1506        140          BLUE     No
2022-11-15 11:17:00   1627        154          BLUE     No
2022-11-15 11:18:00   1542        148          BLUE     No
2022-11-15 11:19:00   1199        553          BLUE     No
2022-11-15 11:20:00   255         1584         GREEN    Yes
2022-11-15 11:21:00   3           1721         GREEN    No
2022-11-15 11:22:00   0           1733         GREEN    No
2022-11-15 11:23:00   0           1780         GREEN    No
2022-11-15 11:24:00   0           1802         GREEN    No

I can't see how to reference the previous active color from the current bin/bucket. That is probably not the way to do it, but that is as far as I got before asking for help.

In short, I want to annotate whenever the count of two fields changes so that one is now larger than the other, and show the name of the now-larger field. Thanks.
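A hedged extension of the binned search above, using streamstats to carry the previous bin's active color onto the current row:

index=...
| bin _time span=2m
| stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount by _time
| eval active=if(BlueCount > GreenCount, "BLUE", "GREEN")
| streamstats current=f window=1 last(active) as previousActive
| eval change=case(isnull(previousActive), "N/A", active=previousActive, "No", true(), "Yes")
| where change="Yes"
| table _time active change

streamstats with current=f and window=1 looks back exactly one row, which provides the previous-bin reference the question asks about.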
I have a question. When running the following search in Search & Reporting for the latest DAT version and AMCore:

index=* sourcetype=mcafee:epo product="McAfee Endpoint Security" engine version>="6300.9594" dat version>="5051.0"

I noticed the output from Splunk doesn't match the information in McAfee ePolicy Orchestrator (ePO). Is there a way to sync McAfee ePO and Splunk, so that when looking for AMCore and DAT files they both display the same version? Is the McAfee Add-on for Splunk configured to show McAfee ENS data?
Table 1:

_time              allocation   website        quantity   failed   impacted_allocations
2022-10-12 09:00   CMD          www.asd.com    100        20       5
2022-10-13 10:00   CMD          www.asco.com   200        30       10
2022-10-25 15:00   KMD          www.hyg.com    300        40       12
2022-11-01 18:00   KMD          www.sts.com    400        50       18

Table 2: Presently I have table 1, but the requirement is that the last column, "impacted_allocations", should be added up per allocation and display the summed number. For example: for allocation we have 2 rows with "CMD", so for CMD the impacted_allocations values should be added up (5+10=15) and shown as 15 in the table. The same goes for KMD (12+18=30); 30 should be displayed in my table, as shown below.

_time              allocation   website        quantity   failed   impacted_allocations
2022-10-12 09:00   CMD          www.asd.com    100        20       15
2022-10-13 10:00   CMD          www.asco.com   200        30
2022-10-25 15:00   KMD          www.hyg.com    300        40       30
2022-11-01 18:00   KMD          www.sts.com    400        50
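A hedged sketch using eventstats for the per-allocation total, then blanking all but the first row of each group so the sum appears only once, as in table 2 (field names are taken from the tables above):

... | eventstats sum(impacted_allocations) as impacted_total by allocation
    | streamstats count as row_num by allocation
    | eval impacted_allocations=if(row_num=1, impacted_total, null())
    | fields - impacted_total, row_num
    | table _time allocation website quantity failed impacted_allocations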
In Dashboard Studio, I applied a drilldown to one of the standard icons and linked it to another dashboard. The goal is to view the linked dashboard upon clicking the icon, and it works. However, people get distracted when they hover the mouse over the icon and the Export and Full screen icons pop up. Is there a way to disable this default, unneeded functionality so that nothing pops up when hovering over an icon?

Many thanks,
Joan
I have created an alert; it is working fine and I am receiving the email, but I am not receiving the auto-cut incident in ServiceNow. How can I make this work?
Hello,

I'm having issues when trying to start the SSE Connector Configuration from the SecureX Relay Module. I'm running a Windows 10 VM with Splunk version 9.0.2; the app has been tested with versions 1.0.0 and 1.1.1, and the same issue persists. The following logs have been observed in splunkd.log:

11-14-2022 19:47:47.394 -0600 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 95, in run
    self._run_command(*args)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 67, in _run_command
    stdin=DEVNULL
  File "C:\Program Files\Splunk\Python-3.7\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files\Splunk\Python-3.7\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\connect_handler.py", line 39, in start_connector
    client.run_action()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\sse_connector_client.py", line 41, in run_action
    supported_commands[self.action]()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\sse_connector_client.py", line 51, in start
    self.shell.run_connector()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 161, in run_connector
    self.shell.run()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 97, in run
    raise ShellError(str(err))
errors.ShellError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\admin.py", line 151, in init
    hand.execute(info)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\admin.py", line 638, in execute
    if self.requestedAction == ACTION_EDIT: self.handleEdit(confInfo)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\relay_module\aob_py3\splunktaucclib\rest_handler\admin_external.py", line 39, in wrapper
    result = meth(self, confInfo)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\relay_module\aob_py3\splunktaucclib\rest_handler\admin_external.py", line 129, in handleEdit
    self.payload,
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\connect_handler.py", line 62, in update
    self.start_connector(data)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\connect_handler.py", line 43, in start_connector
    err.message
splunktaucclib.rest_handler.error.RestError: REST Error [500]: Internal Server Error -- [WinError 2] The system cannot find the file specified

Something I have noticed while comparing the log files from a Linux instance with this Windows instance is that I don't see a log file for the relay-module; I'm not sure whether this is related.
How difficult is it to make EventID an indexed field for the wineventlog index? Can it increase indexing time significantly?
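A hedged sketch of what an index-time extraction could look like; the stanza names, sourcetype, and regex here are illustrative assumptions, not the Windows add-on's actual configuration. Indexed fields do add some indexing cost and grow the index files, which usually matters less than the search-time speedup for high-cardinality filters:

# transforms.conf
[wineventlog_eventid_indexed]
REGEX = EventCode=(\d+)
FORMAT = EventID::$1
WRITE_META = true

# props.conf
[WinEventLog]
TRANSFORMS-eventid = wineventlog_eventid_indexed

# fields.conf  (tells search to treat EventID as an indexed field)
[EventID]
INDEXED = true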
I have a start time column in Splunk in this format: 19:10:54:19. I have a start date column in this format: 2022-11-15. I also have a time zone column in this format: -500. How can I get a new column with the time rounded up to the next hour, in GMT? Example of the output: Mon Mar 21 18:00:00 GMT 2022. Thanks!
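A hedged sketch, assuming the columns are named start_time, start_date, and tz (all names here are assumptions), that the fourth segment of the time is sub-second and can be dropped, that -500 means UTC-05:00, and that the search-head timezone (which strftime renders in) is set to GMT:

| eval tz_norm=if(len(tz)=4, substr(tz,1,1)."0".substr(tz,2), tz)
| eval epoch=strptime(start_date." ".substr(start_time,1,8)." ".tz_norm, "%Y-%m-%d %H:%M:%S %z")
| eval next_hour=ceiling(epoch/3600)*3600
| eval rounded_gmt=strftime(next_hour, "%a %b %d %H:%M:%S GMT %Y")

The first eval pads -500 to -0500 so that %z can parse it, and ceiling rounds the epoch up to the next whole hour.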