All Topics

Hi wonderful people. I wanted to know if we can combine two services in Splunk to get one output:

| rest /services/authentication/users splunk_server=local

and

| rest /services/admin/SAML-groups splunk_server=local

How can I combine the above two to get the results in one query?
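As a starting point, one common pattern is to run the second REST call in an append and merge on a shared key. This is only a sketch; the field names (title, roles) are assumptions about what the two endpoints return and should be checked against the actual REST output.

```
| rest /services/authentication/users splunk_server=local
| fields title roles
| append
    [| rest /services/admin/SAML-groups splunk_server=local
    | fields title roles]
| stats values(roles) as roles by title
```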
Hi fellow Splunkers, good day. We are noticing that applications in our Splunk Cloud Platform are not sorted by App Label, unlike our Splunk Enterprise platform. Would anyone be able to suggest whether there is a way to sort the apps list dropdown in Splunk Cloud? Thanks a lot in advance. Kind Regards!
I want to add an annotation to a dashboard every time we switch from blue servers to green servers, or green to blue. There is no event for this, but I can calculate the active color by comparing the count of each type of server. If I look two minutes ago and compare it to one minute ago, I can see whether the active color changed. So if two minutes ago there were more blue servers than green servers, but now there are more green than blue, I know the active color changed.

This query will show a transition if I give it two time frames (two minutes ago compared to one minute ago). It works, but I want the query to show me all color transitions over a specific time period, such as 24 hours.

index=... earliest=-3m latest=-2m
| stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount
| eval activePreviously=if(BlueCount > GreenCount, "BLUE", "GREEN")
| fields activePreviously
| join
    [search index=... earliest=-2m latest=-1m
    | stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount
    | eval activeNow=if(BlueCount > GreenCount, "BLUE", "GREEN")
    | fields activeNow]
| eval transition=if(activePreviously=activeNow, "no", "yes")
| where transition="yes"
| table transition activeNow activePreviously

This search will show me the active color in 2-minute periods over a given time frame.

index=...
| bin _time span=2m
| stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount by _time
| eval active=if(BlueCount > GreenCount, "BLUE", "GREEN")

This is what I see:

_time                BlueCount  GreenCount  active
2022-11-15 11:15:00  1561       143        BLUE
2022-11-15 11:16:00  1506       140        BLUE
2022-11-15 11:17:00  1627       154        BLUE
2022-11-15 11:18:00  1542       148        BLUE
2022-11-15 11:19:00  1199       553        BLUE
2022-11-15 11:20:00  255        1584       GREEN
2022-11-15 11:21:00  3          1721       GREEN
2022-11-15 11:22:00  0          1733       GREEN
2022-11-15 11:23:00  0          1780       GREEN
2022-11-15 11:24:00  0          1802       GREEN

I want to add a field that indicates whether the color changed from the previous _time. I will then only show (annotate) the time and color where change=yes.

_time                BlueCount  GreenCount  active  change
2022-11-15 11:15:00  1561       143        BLUE    N/A
2022-11-15 11:16:00  1506       140        BLUE    No
2022-11-15 11:17:00  1627       154        BLUE    No
2022-11-15 11:18:00  1542       148        BLUE    No
2022-11-15 11:19:00  1199       553        BLUE    No
2022-11-15 11:20:00  255        1584       GREEN   Yes
2022-11-15 11:21:00  3          1721       GREEN   No
2022-11-15 11:22:00  0          1733       GREEN   No
2022-11-15 11:23:00  0          1780       GREEN   No
2022-11-15 11:24:00  0          1802       GREEN   No

I can't see how to reference the previous active color from the current bin/bucket. That is probably not the way to do it, but that is as far as I got before asking for help.

In short, I want to annotate whenever the counts of two fields change so that one is now larger than the other, and show the name of the larger field.

Thanks.
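One way to reference the previous bin is streamstats, which can carry the last value of active forward into the next row. This is a sketch built on the search above; the case() handling of the first row is illustrative.

```
index=...
| bin _time span=2m
| stats count(eval(match(colortag,"Blue"))) as BlueCount, count(eval(match(colortag,"Green"))) as GreenCount by _time
| eval active=if(BlueCount > GreenCount, "BLUE", "GREEN")
| streamstats window=1 current=f last(active) as previousActive
| eval change=case(isnull(previousActive), "N/A", active=previousActive, "No", true(), "Yes")
| where change="Yes"
```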
I have a question when running the following search in Search & Reporting for the latest DAT and AMCORE versions:

index=* sourcetype=mcafee:epo product="McAfee Endpoint Security" engine_version>="6300.9594" dat_version>="5051.0"

I noticed the output from Splunk doesn't marry up with the information in McAfee ePolicy Orchestrator. Is there a way to sync McAfee ePO and Splunk so that, when looking at AMCORE and DAT files, they both display the same version? Is the McAfee Add-on for Splunk configured to show McAfee ENS data?
table 1:

_time             allocation  website       quantity  failed  impacted_allocations
2022-10-12 09:00  CMD         www.asd.com   100       20      5
2022-10-13 10:00  CMD         www.asco.com  200       30      10
2022-10-25 15:00  KMD         www.hyg.com   300       40      12
2022-11-01 18:00  KMD         www.sts.com   400       50      18

table 2: Presently I have table 1, but the requirement is that the last column, "impacted_allocations", should be added up and the summed number displayed. For example: for allocation we have 2 rows with "CMD", so the CMD impacted_allocations values should be added up (5+10=15) and shown as 15 in the table. The same goes for KMD (12+18=30); 30 should display in my table as shown below.

_time             allocation  website       quantity  failed  impacted_allocations
2022-10-12 09:00  CMD         www.asd.com   100       20      15
2022-10-13 10:00  CMD         www.asco.com  200       30
2022-10-25 15:00  KMD         www.hyg.com   300       40      30
2022-11-01 18:00  KMD         www.sts.com   400       50
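A sketch of one approach: eventstats can add the per-allocation sum to every row, and streamstats can then blank it on all but the first row of each allocation. Field names follow the tables above; the helper fields impacted_total and rowNum are illustrative.

```
...
| eventstats sum(impacted_allocations) as impacted_total by allocation
| streamstats count as rowNum by allocation
| eval impacted_allocations=if(rowNum=1, impacted_total, null())
| fields - impacted_total rowNum
```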
In Dashboard Studio, I applied a drilldown to one of the standard icons and linked it to another dashboard. The goal is to view the linked dashboard upon clicking the icon, and it works. However, people get distracted when they hover over the icon and the Export and Full Screen icons pop up. Is there a way to disable this default, unneeded functionality so nothing pops up on mouse hover over an icon? Many thanks, Joan
I have created an alert; it is working fine and I am receiving the email, but I am not receiving the auto-cut incident in ServiceNow. How can I make this work?
Hello,

I'm having issues when trying to start the SSE Connector Configuration from the SecureX Relay Module. I'm running a Windows 10 VM, Splunk version 9.0.2. App versions 1.0.0 and 1.1.1 have both been tested; the same issue persists. The following logs have been observed in splunkd.log:

11-14-2022 19:47:47.394 -0600 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 95, in run
    self._run_command(*args)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 67, in _run_command
    stdin=DEVNULL
  File "C:\Program Files\Splunk\Python-3.7\lib\subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files\Splunk\Python-3.7\lib\subprocess.py", line 1207, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\connect_handler.py", line 39, in start_connector
    client.run_action()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\sse_connector_client.py", line 41, in run_action
    supported_commands[self.action]()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\sse_connector_client.py", line 51, in start
    self.shell.run_connector()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 161, in run_connector
    self.shell.run()
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\shell_client.py", line 97, in run
    raise ShellError(str(err))
errors.ShellError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\admin.py", line 151, in init
    hand.execute(info)
  File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\admin.py", line 638, in execute
    if self.requestedAction == ACTION_EDIT: self.handleEdit(confInfo)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\relay_module\aob_py3\splunktaucclib\rest_handler\admin_external.py", line 39, in wrapper
    result = meth(self, confInfo)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\relay_module\aob_py3\splunktaucclib\rest_handler\admin_external.py", line 129, in handleEdit
    self.payload,
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\connect_handler.py", line 62, in update
    self.start_connector(data)
  File "C:\Program Files\Splunk\etc\apps\relay-module\bin\connect_handler.py", line 43, in start_connector
    err.message
splunktaucclib.rest_handler.error.RestError: REST Error [500]: Internal Server Error -- [WinError 2] The system cannot find the file specified

Something I have noticed while comparing the log files from a Linux instance with this Windows instance is that I don't see a log file for the relay-module; I'm not sure if this is related.
How difficult is it to make EventID an indexed field for the wineventlog index? Could it increase indexing time significantly?
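For reference, indexed field extractions are configured in transforms.conf and props.conf on the indexing tier, plus fields.conf on the search head. A hedged sketch follows; the stanza name, sourcetype, and regex are illustrative assumptions, not tested against real Windows event data.

```
# transforms.conf (indexers / heavy forwarders) -- stanza name is illustrative
[wineventlog_eventid]
REGEX = EventCode=(\d+)
FORMAT = EventID::$1
WRITE_META = true

# props.conf -- sourcetype is an assumption; apply to your actual sourcetype
[WinEventLog:Security]
TRANSFORMS-eventid = wineventlog_eventid

# fields.conf (search heads) -- tells the search tier the field is indexed
[EventID]
INDEXED = true
```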
I have a start time column in Splunk in this format: 19:10:54:19
I have a start date column in this format: 2022-11-15
I also have a time zone column in this format: -500
How can I get a new column with the time rounded up to the next hour, in GMT?
Example output: Mon Mar 21 18:00:00 GMT 2022
Thanks!
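A sketch of one approach, assuming the trailing :19 in the time is a two-digit subsecond and that the offset column can be normalized to a four-digit %z value (so -500 becomes -0500). The field names start_time, start_date, and timezone mirror the description above.

```
| eval tz=printf("%+05d", tonumber(timezone))
| eval start_epoch=strptime(start_date." ".start_time." ".tz, "%Y-%m-%d %H:%M:%S:%2N %z")
| eval next_hour_epoch=3600*ceiling(start_epoch/3600)
| eval next_hour_gmt=strftime(next_hour_epoch, "%a %b %d %H:%M:%S GMT %Y")
```

Note that strftime renders in the search head's time zone, so the displayed hour may still need a zone adjustment to be true GMT.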
This is the only error in the logs when I tried to check:

javax.xml.stream.XMLStreamException: No element was found to write

It started appearing after I restarted the Splunk service on the heavy forwarder, but after some time the same issue occurred again. What could be the cause?

Thanks
Shilpi
Hi, how will the search head know which index has data? It's an interview question. Kindly help me. Regards, Suman P.
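As a pointer for the question itself: the search head holds no index data of its own; it distributes searches to the indexers and learns at search time what they hold. You can list which indexes actually contain events with, for example:

```
| tstats count where index=* by index
```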
I've been having trouble with my indexers, but everything is fine now and they are back up. However, my replication factor (RF) and search factor (SF) are still not being met. I have tried tweaking them, but it's not working. I have added a screenshot if anyone can kindly assist. Thanks
hello
Why doesn't my post-process search work when using the timechart command?

<search id="cap">
  <query>`index_mes` (sourcetype=web_request OR sourcetype=web:request)</query>
  <earliest>$date.earliest$</earliest>
  <latest>$date.latest$</latest>
</search>

<row>
  <panel>
    <chart>
      <search base="cap">
        <query>| timechart span=15m dc(sam) as cap</query>

Thanks
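One frequent cause, offered here as a hedged guess: a post-process search only receives the fields the base search passes along, and a non-transforming base search may not carry sam through to the timechart. A sketch of the usual fix is an explicit fields clause in the base query:

```
<search id="cap">
  <query>`index_mes` (sourcetype=web_request OR sourcetype=web:request) | fields _time sam</query>
  <earliest>$date.earliest$</earliest>
  <latest>$date.latest$</latest>
</search>
```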
I'm using Splunk to collect data about a collection of logs. A log returned from Splunk might look like:

type: user creation
transaction_id: 1234
message=process started

Now, I want to count how many times an error has been linked to a transaction for user creation, without knowing the transaction in advance. For example, this error might be a log:

type: error
transaction_id: 1234
message=process abord

I'm trying to use the rex command to isolate the transaction_id from the first log, then pipe it to find an error with the same transaction_id (to get a count of how many times an error has been associated with the user creation process), but my request seems to consider the first part of my request instead of just using its result to pipe into the second request. Here is what I have so far:

type = "user creation" | rex field= (?<transaction_id>[^-]+)"| search transaction_id=field message="process abord" | stats count as total_error_user_creation

Could anyone suggest some improvements to get the desired result?
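A sketch of an alternative that avoids piping one search into another: pull both event types in a single search and group by transaction_id. This assumes transaction_id is already extracted as a field, and the message value mirrors the "process abord" text from the sample logs.

```
type="user creation" OR type="error"
| stats count(eval(type="error" AND message="process abord")) as error_count, count(eval(type="user creation")) as creation_count by transaction_id
| where creation_count > 0 AND error_count > 0
| stats sum(error_count) as total_error_user_creation
```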
Right now I'm using regex to pull data with the phrase "MFA challenge succeeded" using the following:

| rex "(?<MFA>[a-z,A-Z,\s,\bcode\b]+)account\s+(?<account>\w+)\s+with\s+email\s+(?<email>[^ ]+).\s+\w+\s+\w+\s+\w+\s+\w+\s+(?<keycloak_id>[a-z,0-9,-]+)"

from the following field:

message: MFA challenge succeeded for account aaaaaaa. Email is example@example.com. Keycloak session id is 11111111-1111-1111-1111-1111111111111

However, in the message field the "MFA challenge succeeded" phrase will often be different, such as: MFA challenge issued, MFA code issued, MFA challenge failed. I need a way to use regex to pull out messages where it says MFA challenge issued, MFA code issued, or MFA challenge failed, and then display them in a table.
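A sketch of one way to capture all the variants with a single alternation and then tabulate them; the regex assumes the status phrase always starts with "MFA" and follows the "MFA challenge/code succeeded/issued/failed" shape shown above.

```
| rex field=message "(?<MFA_status>MFA\s+(?:challenge|code)\s+(?:succeeded|issued|failed))"
| stats count by MFA_status
```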
I have some Phantom playbooks performing tasks that I want to monitor on a Splunk dashboard - runs/day, distinct tasks processed per run, success/retry/failure, things like that. I can see the revision control check-ins by playbook name, but not the individual playbook runs - those are identified with an integer that corresponds to the playbook ID. The playbook ID changes for each check-in and doesn't appear to be included in the check-in event. Surely someone must have set up a way to track Phantom playbook runs from a Splunk dashboard with a human-readable playbook name  - how should I start going about this?
Hi, my JSON logs come in two different patterns: sometimes with timestamp and host added, and sometimes without these extra fields. When I don't have the extra timestamp and host, the extractions work better, but for the events with timestamp and host, the events are not breaking properly.

Type 1 logs:

Component: xxxxx
Data:
Description: xxxx
Message: xxxxx Accessed URL: xxxx
Originator: xxxx
Target: xxxx
appName: xxxxxx
subTarget: XYZ
timeStamp: 1668522719915

Type 2 logs:

Nov 15 15:31:58 ics021013230.ics-eu-1.asml.com {"appName": "XXXXXXX","Component":"XXXXX","timeStamp":"1668522718900","eventId":"2e0525","Description":"XXXX Gateway: YYYYY ","Originator":"xxxxxx","Target":xxxxx","subTarget":"xxxxx"
Hello, we have been using this query to list hosts that have not sent logs in the past 24 hours. It had been working well, but for some unknown reason it has now suddenly stopped working, in the sense that it shows no results despite there being hosts that meet the condition. Can someone please help figure out why?

| tstats max(_time) as lastSeen_epoch WHERE index=linux [| inputlookup linux_servers | table host] by host
| where lastSeen_epoch<relative_time(now(),"-24H")
| eval LastSeen=strftime(lastSeen_epoch,"%m/%d/%y %H:%M:%S")
| fields host LastSeen

Our lookup file has 700 hosts. Now, if I reverse the where condition (just for testing) as shown below,

| where lastSeen_epoch > relative_time(now(),"-24H")

it shows 694 results, meaning there are 6 hosts (700-694) that are not logging. So why does the original query not display the 6 hosts?

Thanks
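One possible explanation, offered as a guess: tstats only returns rows for hosts that have at least one event inside the search time range, so a host silent for longer than the range never reaches the where clause at all. A sketch that keeps every lookup host visible, including hosts with no events:

```
| tstats max(_time) as lastSeen_epoch where index=linux by host
| append [| inputlookup linux_servers | table host]
| stats max(lastSeen_epoch) as lastSeen_epoch by host
| where isnull(lastSeen_epoch) OR lastSeen_epoch<relative_time(now(),"-24h")
| eval LastSeen=coalesce(strftime(lastSeen_epoch,"%m/%d/%y %H:%M:%S"), "never")
| fields host LastSeen
```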
Hello all, I am getting a continuous error saying the rule has a malformed related_searches definition. I have checked the lookup file as well and everything appears normal, but I am still getting the error. Is there any inconsistency in the query? Below is the query used for alerting.

index=wineventlog source="*WinEventLog:Security" EventCode=4688
    [ | inputlookup tools.csv WHERE discovery_or_attack=attack
    | stats values(filename) as search ]
| transaction host maxpause=5m
| where eventcount>=4
| fields _raw closed_txn field_match_sum linecount
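One thing worth double-checking, offered only as a guess: in the inputlookup WHERE clause, an unquoted value is easy to misread as a field name. A sketch with the value quoted:

```
index=wineventlog source="*WinEventLog:Security" EventCode=4688
    [| inputlookup tools.csv WHERE discovery_or_attack="attack"
    | stats values(filename) as search]
| transaction host maxpause=5m
| where eventcount>=4
```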