All Topics


I have two fields: DNS and DNS_Matched. The latter is a multi-value field. How can I check whether the value of DNS appears as one of the values in the multi-value field DNS_Matched? Example:

DNS    DNS_Matched
host1  host1, host1-a, host1-r
host2  host2, host2-a, host2-r
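A minimal SPL sketch for the question above, assuming exact-match semantics. mvfind() takes a regex, so the value is anchored with ^ and $ to stop host1 from matching host1-a; this also assumes DNS contains no regex metacharacters:

```spl
... | eval in_matched = if(isnotnull(mvfind(DNS_Matched, "^".DNS."$")), "yes", "no")
```

mvfind() returns the index of the first matching value (or null if none), so isnotnull() turns it into a yes/no flag.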
I am trying to set up props & transforms to send DEBUG events to the null queue. I tried the regex below but it doesn't seem to work.

transforms.conf:

[setnull]
REGEX = .+(DEBUG...).+$
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[sourcetype::risktrac_log]
TRANSFORMS-null = setnull

I also used REGEX=\[\d{2}\/\d{2}\/\d{2}\s\d{2}:\d{2}:\d{2}:\d{3}\sEDT]\s+DEBUG\s.* but that doesn't drop DEBUG messages either. I just tried DEBUG alone in the regex too, with no luck. Can someone help me here, please?

Sample event:

[10/13/23 03:46:48:551 EDT] DEBUG DocumentCleanup.run 117 : /_documents document cleanup complete.

How does REGEX pick the pattern? I can see that both regexes are able to match the whole event. We can't turn DEBUG off in the application.
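One possible cause, worth checking: props.conf sourcetype stanzas use the bare sourcetype name, so `[sourcetype::risktrac_log]` would never be applied (the `source::` and `host::` prefixes exist, but there is no `sourcetype::` stanza form). A hedged sketch of the usual null-queue setup, which must live on the first full Splunk instance that parses the data (indexer or heavy forwarder, not a universal forwarder):

```ini
# props.conf
[risktrac_log]
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue
```

REGEX only needs to match part of the event, not the whole line, so a simple pattern is enough here.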
We are using Splunk Cloud 9.0.2303.201 and have version 9.0.4 of the Splunk Universal Forwarder installed on a RHEL 7.9 server. The UF is configured to monitor a log file that outputs JSON in this format:

{"text": "Ending run - duration 0:00:00.249782\n", "record": {"elapsed": {"repr": "0:00:00.264696", "seconds": 0.264696}, "exception": null, "extra": {"run_id": "b20xlqbi", "action": "status"}, "file": {"name": "alb-handler.py", "path": "scripts/alb-handler.py"}, "function": "exit_handler", "level": {"icon": "", "name": "INFO", "no": 20}, "line": 79, "message": "Ending run - duration 0:00:00.249782", "module": "alb-handler", "name": "__main__", "process": {"id": 28342, "name": "MainProcess"}, "thread": {"id": 140068303431488, "name": "MainThread"}, "time": {"repr": "2023-10-13 10:09:54.452713-04:00", "timestamp": 1697206194.452713}}}

Long story short, it seems that Splunk is getting confused by the multiple fields in the JSON that look like timestamps. The timestamp that should be used is the very last field in the JSON. I first set up a custom sourcetype that's a clone of the _json sourcetype by manually inputting some of these records via Settings -> Add Data. Using that tool I was able to get Splunk to recognize the correct timestamp via the following settings:

TIMESTAMP_FIELDS = record.time.timestamp
TIME_FORMAT = %s.%6N

When I load the above record by hand via Settings -> Add Data and use my custom sourcetype with the above fields, Splunk sets the _time field properly, in this case to 10/13/23 10:09:54.452 AM. The exact same record, when loaded through the Universal Forwarder, appears to ignore the TIMESTAMP_FIELDS parameter. It ends up with a date/time of 10/13/23 12:00:00.249 AM, which indicates that it's trying to extract the date/time from the "text" field at the very beginning of the JSON (the string "duration 0:00:00.249782").

The inputs.conf on the Universal Forwarder is quite simple:

[monitor:///app/appman/logs/combined_log.json]
sourcetype = python-loguru
index = test
disabled = 0

Why is the date/time parsing working properly when I manually load these logs via the UI but not when they are imported via the Universal Forwarder?
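A likely explanation, offered as an assumption to verify: the _json sourcetype uses INDEXED_EXTRACTIONS = json, and indexed extractions (including TIMESTAMP_FIELDS) are applied on the universal forwarder itself, not downstream. If the custom sourcetype definition only exists in Splunk Cloud, the UF never sees it and falls back to generic timestamp guessing. A sketch of the props.conf that would need to be deployed to the UF itself:

```ini
# props.conf on the Universal Forwarder
# (e.g. in an app under $SPLUNK_HOME/etc/apps/<your_app>/local/)
[python-loguru]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = record.time.timestamp
TIME_FORMAT = %s.%6N
```

The app name above is a placeholder; the key point is that these settings must be present where the structured parsing happens.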
I am attempting to set up an INGEST_EVAL for the _time field. My goal is to check whether _time is in the future and prevent any future timestamps from being indexed. The INGEST_EVAL is configured correctly in props.conf, fields.conf and transforms.conf, but it fails when I attempt to use a conditional statement. I want to do something like this in my transforms.conf:

[ingest_time_timestamp]
INGEST_EVAL = ingest_time_stamp:=if(_time > time(), time(), _time)

If _time is in the future, I want it set to the current time; otherwise I want to leave it alone. Anyone have any ideas?
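A hedged sketch of one form that appears in Splunk's INGEST_EVAL examples, overwriting _time directly rather than writing a separate field (ingest-eval behavior varies by version, so treat this as something to test rather than a confirmed fix):

```ini
# transforms.conf
[ingest_time_timestamp]
INGEST_EVAL = _time := if(_time > time(), time(), _time)
```

Here time() returns the current wall-clock time at ingestion. If the conditional still fails, trying the same expression without the conditional first (e.g. a plain assignment) can help isolate whether the if() or the := assignment is what is being rejected.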
I want to extract the Sample ID field value from: "Sample ID":"020ab888-a7ce-4e25-z8h8-a658bf21ech9"
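A search-time extraction sketch for that pattern (the output field name sample_id is an arbitrary choice):

```spl
... | rex field=_raw "\"Sample ID\":\"(?<sample_id>[^\"]+)\""
```

This captures everything between the quotes after "Sample ID": into a field named sample_id.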
Whenever I enable index clustering on what is to be my Splunk manager node and then restart it, it never comes back up. Disabling index clustering through the CLI restores access to the GUI and allows Splunk to start normally. journalctl returns the following (after trying to start Splunk with "systemctl start splunk"):

> splunk[9423]: Waiting for web server at https://127.0.0.1:8000 to be available.........
> splunk[9423]: WARNING: web interface does not seem to be available!
> systemd[1]: splunk.service: control process exited, code=exited status=1
> systemd[1]: Failed to start Splunk Enterprise

Trying to start Splunk from the binary returns the following:

> Checking http port [8000]: not available
> ERROR: http port [8000] - no permision to use address/port combination.  Splunk needs to use this port.

I've reinstalled Splunk and rebuilt the VM that Splunk is sitting on, and neither has helped.
Hello all, I could use some help here with creating a search. Ultimately, when a user is added to a specific set of security groups, I would like to know which security groups, if any, were removed from that same user.

Here is a search for security group removal:

index=wineventlog EventCode=4729 EventCodeDescription="A member was removed from a security-enabled global group" Subject_Account_Name=srv_HiveProvSentryNe OR Subject_Account_Name=srv_HiveProvSentry source="WinEventLog:Security" sourcetype=WinEventLog
| table member, Group_Name, Subject_Account_Name, _time

Here is a search for security group addition:

index=wineventlog EventCode=4728 EventCodeDescription="A member was Added to a security-enabled global group" Subject_Account_Name=srv_HiveProvSentryNe OR Subject_Account_Name=srv_HiveProvSentry source="WinEventLog:Security" sourcetype=WinEventLog
| table member, Group_Name, Subject_Account_Name, _time

Additional search info:
EventCode=4728 - Added
EventCode=4729 - Removed
Group_Name - security group
Subject_Account_Name - prov sentry
member - user

Security groups I would like to monitor users being added to:
RDSUSers_GRSQCP01
RDSUSers_GROQCP01
RDSUSers_BRSQCP01
RDSUSers_BROQCP01
RDSUSers_VRSQCP01
RDSUSers_VROQCP01

Again, I am looking to monitor whether a user added to any of the above 6 security groups was, within a few hours before or after that event, removed from any other groups. Let me know if I can provide any additional info, and as always, thank you for the help.
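A hedged sketch of one way to correlate the two event codes per user. Field names are taken from the searches above; the 4-hour window and exact filters are assumptions to adjust:

```spl
index=wineventlog source="WinEventLog:Security" sourcetype=WinEventLog
    (EventCode=4728 OR EventCode=4729)
    (Subject_Account_Name=srv_HiveProvSentryNe OR Subject_Account_Name=srv_HiveProvSentry)
| eval action=if(EventCode=4728, "added", "removed")
| stats values(eval(if(action="added", Group_Name, null()))) as groups_added
        values(eval(if(action="removed", Group_Name, null()))) as groups_removed
        range(_time) as window_sec
        by member
| search groups_added IN ("RDSUSers_GRSQCP01","RDSUSers_GROQCP01","RDSUSers_BRSQCP01","RDSUSers_BROQCP01","RDSUSers_VRSQCP01","RDSUSers_VROQCP01")
| where isnotnull(groups_removed) AND window_sec <= 4*3600
```

Run it over a time range wide enough to cover the "few hours before and after" requirement; the stats groups each user's adds and removals together so the two can be compared on one row.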
Hi, if we upgrade our license to 500 GB, what is the best-practice hardware architecture (CPU + RAM), and how many search heads and indexers do we need? How much storage per indexer do we need if, say, retention is 30 days across N installed indexers? Or, if nothing else, can you share where there is a good PDF for me to read with those answers? Thank you.
My data is coming from O365 as JSON. I am using spath to get the required fields, and after that I want to compare the data with a static list containing the roles to be monitored, but unfortunately I am getting the error below:

Error in 'table' command: Invalid argument: 'role="Authentication Administrator"'

It's not working. Please see the attached screenshot of the relevant search.
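The error suggests the role filter was placed inside the table command, which only accepts field names. Filtering belongs in search or where instead; a sketch, with field names assumed from the description:

```spl
... | spath
| search role="Authentication Administrator"
| table role, user, _time
```

To compare against a whole static list, a lookup-based subsearch is a common pattern (monitored_roles.csv is a hypothetical lookup containing a role column):

```spl
... | spath
| search [| inputlookup monitored_roles.csv | fields role]
| table role, user, _time
```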
Hi there!

I have a dropdown "office" in dashboard 1 as a multiselect (full office, half office); based on the selection, it should display the results in dashboard 1.

Dashboard 1 has a pie chart. Clicking the pie chart needs to take the user to dashboard 2, which contains the same "office" multiselect (full office, half office, non-compliant office).

If I click the pie chart in dashboard 1 while the office value is "full office, half office", dashboard 2 should show the same selection, and its panels should use that value. I have configured the drilldown link already. The problem is that because the multiselect adds a prefix ", postfix " and delimiter , to the token, that decorated value gets passed to the dashboard 2 dropdown, so the panels in dashboard 2 return no results.

I need a solution for this. Thanks, Manoj Kumar S
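One commonly suggested pattern for this, sketched under the assumption that both dashboards are Simple XML: pass the raw form token in the drilldown instead of the prefixed/quoted search token, and let dashboard 2's own multiselect apply its valuePrefix/valueSuffix/delimiter:

```xml
<drilldown>
  <link target="_blank">
    <![CDATA[/app/my_app/dashboard2?form.office=$form.office$]]>
  </link>
</drilldown>
```

Here my_app and dashboard2 are placeholders. $form.office$ carries the selected values without the search-time decoration, so the receiving dropdown can pre-select them and rebuild the quoted, delimited string itself.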
Hello, I would like to calculate a weighted average of an average call time. The logs I have available are of this type (see screenshot), and I want to obtain the average-time calculation as shown. The formula applied is as follows:

temps_moyen = ((nb_appel_1 * temps_moyen_1) + (nb_appel_2 * temps_moyen_2) + ...) / sum of nb_appel

Here is what I have done so far:

index=rcd statut=OK partenaire=000000000P
| eval date_appel=strftime(_time,"%b %y")
| dedup nom_ws date_appel partenaire temps_rep_max temps_rep_min temps_rep_moyen nb_appel statut tranche_heure heure_appel_max
| eval nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, null())
| eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO", nb_appel, null())
| eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, null())
| eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO", temps_rep_min, null())
| eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, null())
| eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO", temps_rep_max, null())
| eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, null())
| eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO", temps_rep_moyen, null())
| stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO
        min(temps_rep_min_OK) as temps_rep_min_OK, min(temps_rep_min_KO) as temps_rep_min_KO
        max(temps_rep_max_OK) as temps_rep_max_OK, max(temps_rep_max_KO) as temps_rep_max_KO,
        values(temps_rep_moyen_OK) as temps_rep_moyen_OK, values(temps_rep_moyen_KO) as temps_rep_moyen_KO
        values(nom_ws) as nom_ws, values(date_appel) as date_appel
| eval temps_rep_moyen_KO_calcul=sum(temps_rep_moyen_KO*nb_appel_KO)/(nb_appel_KO)
| eval temps_rep_moyen_OK_calcul=sum(temps_rep_moyen_OK*nb_appel_OK)/(nb_appel_OK)
| fields - tranche_heure_bis, tranche_heure_partenaire
| sort 0 tranche_heure
| table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO

I cannot get the final OK average time displayed. I really need help, please. Thank you so much.
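A likely blocker in the search above: sum() is an aggregation function and cannot be used inside a plain eval after stats. The weighted average needs the multiplication done per row inside stats via sum(eval(...)). A hedged sketch for the OK side, using the field names from the post:

```spl
index=rcd statut=OK partenaire=000000000P
| eval date_appel=strftime(_time, "%b %y")
| stats sum(eval(temps_rep_moyen * nb_appel)) as somme_ponderee
        sum(nb_appel) as nb_appel_total
        by nom_ws date_appel
| eval temps_rep_moyen_OK_calcul = round(somme_ponderee / nb_appel_total, 2)
```

The same shape with statut=KO gives the KO column, and the dedup and min/max evals from the original can be layered back on top.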
Hello community, I came across an issue where the token generated for the SOAR user "REST", which I use for the SIEM-SOAR integration, was identical to the one in the Splunk App for SOAR. When I ran the "test connectivity" command on the SOAR Server Configuration, it responded with "Authentication Failed: Invalid token". I just regenerated the token and now everything works like a charm. Have you ever encountered such an issue?
Hi, I would like to export a table to CSV in Dashboard Studio. Unfortunately, when I click Export, only a PNG is exported. Any hint? Thank you. Best regards, Marta
Hi Splunkers,

I have a multiselect whose selected values need to be passed to a macro. Can you please help with that?

The need is to pass the multiselect values into the token $macros2$, where each multiselect value is itself a macro. Multiselect values:
1. value 1
2. value 2
3. value 3
4. All

Search: `macros1(`$macros2$`, now(), -15d@d, *, virus, *, *, *)`

Thanks in advance! Manoj Kumar S
Hi,

I have the table data below, which I have timecharted with a 1-hour span. I want to remove the rows that come in at a different time compared to the rest of the data (the 09:00 rows, highlighted in red in my screenshot). Can I use the outlier command to do this, and how can I achieve this requirement? Thank you in advance.

_time             B  C  D  E  F
2023-10-06 22:00
2023-10-07 22:00
2023-10-08 22:00
2023-10-09 09:00
2023-10-09 22:00
2023-10-10 09:00
2023-10-10 22:00
2023-10-11 22:00
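Since the unwanted rows are simply the ones whose time-of-day differs from the regular 22:00 schedule, a plain filter after the timechart may be simpler than the outlier command. A sketch:

```spl
<your existing timechart search>
| where strftime(_time, "%H:%M") = "22:00"
```

If the schedule hour can drift, the outlier command (or an eventstats comparison against each column's median) would be the more general route, but for a fixed daily time the where clause is enough.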
Hi Team, I am trying to create a topic manually using Confluent Control Center (localhost:9021) and then, via Connect --> connect-default --> Connector --> Upload connector config file, I am uploading the Splunk sink properties, which already include splunk.hec.token. But I am still getting the error '"splun.hec.token" is invalid' in the Confluent UI (2nd screenshot) in the browser. I'd appreciate it if anybody could help here. Please note we are trying this on Ubuntu, and Splunk, Confluent, and Kafka Connect are all on the same network on the same server.

Splunk sink properties:

name=TestConnector
topics=mytopic
tasks.max=1
connector.class=com.splunk.kafka.connect.SplunkSinkConnector
splunk.hec.token=453a412d-029f-4fcf-a896-8c388241add0
splunk.indexes=Attest
splunk.hec.uri=https://localhost:8889
splunk.hec.raw=true
splunk.hec.ack.enabled=true
splunk.hec.ssl.validate.cert=false
splunk.hec.ack.poll.interval=20
splunk.hec.ack.poll.threads=2
splunk.hec.event.timeout=300
splunk.hec.ssl.validate.certs=false
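Two things stand out in the properties above, offered as guesses to verify against the Splunk Connect for Kafka documentation: the file sets both splunk.hec.ssl.validate.cert and splunk.hec.ssl.validate.certs (the plural form is the documented key), and the UI error quoting "splun.hec.token" (missing a "k") hints that a misspelled key may exist in the actually uploaded file even though the version pasted here looks correct. A cleaned-up sketch with the duplicate removed:

```ini
name=TestConnector
topics=mytopic
tasks.max=1
connector.class=com.splunk.kafka.connect.SplunkSinkConnector
splunk.hec.token=453a412d-029f-4fcf-a896-8c388241add0
splunk.indexes=Attest
splunk.hec.uri=https://localhost:8889
splunk.hec.raw=true
splunk.hec.ack.enabled=true
splunk.hec.ack.poll.interval=20
splunk.hec.ack.poll.threads=2
splunk.hec.event.timeout=300
splunk.hec.ssl.validate.certs=false
```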
Is Splunk Universal Forwarder compatible with Amazon Linux?  
How can I remove the "Open in Search" (search magnifying glass) icon/option from a panel in a Dashboard Studio dashboard? I know how it's done in the Classic dashboard, but cannot work out how to do it in Dashboard Studio. Thanks
A recent change to logs has broken my dashboard panels and reporting. I'm struggling to find the best way to modify my search criteria to pick up data both prior to the change and after. It's a very simple change, as single quotation marks were added around the field value, but it's giving me a big headache.

index=prd sourcetype=core Step=* Time=* | timechart avg(Time) by Step span=1d

Field in event log changed:
FROM: Step=CONVERSION_APPLICATION
TO: Step='CONVERSION_APPLICATION' (with single quotation marks)
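A sketch that normalizes both the old and new formats at search time, so the timechart sees one consistent Step value:

```spl
index=prd sourcetype=core Step=* Time=*
| eval Step=trim(Step, "'")
| timechart avg(Time) by Step span=1d
```

trim() strips the single quotes when present and leaves older, unquoted values untouched.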
Hello: I recently started playing with the Risk framework, RBA, etc. Most of my Risk Analysis dashboard is working within Enterprise Security, except for three (3) sections:

Risk Modifiers By Annotations
Risk Score By Annotations
Risk Modifiers By Threat Object

For the annotations part, we do manually tag MITRE ATT&CK tactics within our content, so I'm not sure why these panels show nothing. Also, does anyone know which saved searches run in the background to populate these panels? I'd like to double-check that I have those enabled. Thanks!