All Topics

Hi,

I am building an alert in Splunk. I have a log with 6 different variables, but I am actually interested in only 4 of them (A, B, C and D). Those variables usually have a numeric value like 50, but the value can also be 'unknown'. This is a sample log event:

{
  responseStatus: 200,
  calculationBreakdown: {
    evaluation: {
      A: unknown
      B: unknown
      C: unknown
      D: unknown
      E: 50
      F: unknown
    }
  }
}

I am trying to compute stats for the number of 'unknown' values for each variable, plus the total number of calls. Then I can calculate the percentage of 'unknown' for each variable (which is treated as an error) and fire my alert based on those stats. So I tried a simple query:

index=someIndex app=someApp event.responseStatus=200
| stats count as total, sum(eval(if('event.calculationBreakdown.evaluation.A'==unknown, 1, 0))) as total_errors_for_A

I wanted to do the same for errors for B, C and D, but this does not work at all: it calculates the total for all the requests but 0 for total_errors_for_A, and I know there are some A = unknown events in the time range, so they should be counted. When I change sum to count, it shows the same number in both columns, for total and for total_errors_for_A. I also tried different quotes for 'event.calculationBreakdown.evaluation.A' and unknown (single / double / no quotes), and I also added spath 'event.calculationBreakdown.evaluation.A' before | stats, but that does not change anything.

Is anyone able to help? I am pretty sure it is something super simple, but my mind goes blank. Thanks a million!
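For context on the quoting here: in eval expressions, single quotes wrap field names and double quotes wrap string literals, so an unquoted unknown on the right-hand side is read as a reference to a (nonexistent) field and never matches. A minimal sketch under that assumption, reusing the index, app, and field names from the question:

index=someIndex app=someApp event.responseStatus=200
| stats count as total,
    sum(eval(if('event.calculationBreakdown.evaluation.A'=="unknown", 1, 0))) as total_errors_for_A,
    sum(eval(if('event.calculationBreakdown.evaluation.B'=="unknown", 1, 0))) as total_errors_for_B
| eval pct_errors_A=round(100*total_errors_for_A/total, 2)

If the dotted JSON fields are not auto-extracted at search time, running spath with an explicit output field before the stats (e.g. | spath output=A path=event.calculationBreakdown.evaluation.A) is one hedged way to make sure the field exists when the eval runs.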
Hello,

When I try to start the watchdog, it does not start and fails with the following error. Because the watchdog does not start, DB replication does not work either:

Sql Thread for relay log not running - replication error
I'm looking for a way to list all indexes available to each role in Splunk (including access inherited from other roles). This search almost does it:

| rest /servicesNS/-/-/authorization/roles count=0 splunk_server=local
| fields title, srchIndexesAllowed
| rename srchIndexesAllowed as Indexes, title as Role
| search Indexes=*

However, this does not account for inherited indexes. Listing the indexes available to a single role is fairly easy (but time consuming):

1. Under Settings -> Roles, select a role (or Edit)
2. Open the "Indexes" tab
3. Filter "Show Selected" from the far right column

Is there a way to get this list (for all roles) from SPL?
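A hedged extension of the same search: the roles endpoint also returns an imported_srchIndexesAllowed field which, as far as I can tell, lists the indexes inherited from parent roles, so merging the two fields may cover the inheritance case:

| rest /servicesNS/-/-/authorization/roles count=0 splunk_server=local
| fields title, srchIndexesAllowed, imported_srchIndexesAllowed
| eval Indexes=mvappend(srchIndexesAllowed, imported_srchIndexesAllowed)
| rename title as Role
| table Role, Indexes

Treat imported_srchIndexesAllowed as an assumption to verify against your own REST output.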
Hello everyone, I have not started yet; I wanted to make progress by gathering ideas from you first. I have 4 AppDynamics servers: 1 ES, 1 EC and 2 Controllers. Every month, during patch windows, I perform their manual failover in sequence. I want to automate this via Ansible, vRO or ServiceNow instead of doing it manually. For example:

1. The ES server will be stopped; after the patch is confirmed running, it will restart and bring the service back up.
2. The same will be repeated for the EC server.
3. For the Controllers, it will check which server is secondary, then stop and restart that one, failing over if needed.
4. After the failover completes and traffic has switched to the other server, it will perform the same steps on the previously primary server, and print success or error messages as the final log on all servers.

I look forward to your thoughts on this. One additional correction: patches are applied to the servers automatically, and only the restart is expected. What I want to do is include failover in this restart sequence; that is, remove the restart operation from the manual process and automate it.
I have a doubt: if we are using a heavy forwarder to parse data and forward it to indexers, does it need an Enterprise license or just the forwarder license? Can I use something like ./splunk edit licenser-groups Forwarder -is_active 1? I don't want to use my HF as a deployment server, license manager or monitoring console; its only use is going to be parsing data received from UFs and forwarding it to peers or indexers. Now let's say I install add-ons like Splunk DB Connect. Would that require an Enterprise license, or would the forwarder license still suffice? I am still just forwarding the data.
While running the search below I am not getting any events:

index=main_vulnerability_database sourcetype=vulnerability_overview _bkt="main_vulnerability_database~0~FB1A6C9D-87F2-4A38-B420-94F2171CE493" _cd=0:1015

But when I move the filter into a search command, I do get events:

index=main_vulnerability_database sourcetype=vulnerability_overview | search _bkt="main_vulnerability_database~0~FB1A6C9D-87F2-4A38-B420-94F2171CE493" _cd=0:1015

Ideally both should give the same results. Looking for the reason why this is happening.
Hello team,

So far we have been ignoring the error "ERROR HTTPClient - Should have gotten at least 3 tokens in status line, while getting response code. Only got 0." present in almost every search.log from our use cases; however, we are now looking at the comment from a Splunk employee here
After our Splunk upgrade to 9.1.0.2, we have found many errors; below is one of them. This error appears on all the indexers in the environment:

7.7% 07-06-2023 07:54:39.849 +0000 ERROR SearchProcessRunner [45132 PreforkedSearchesManager-0] - preforked search=0/21892 on process=0/11297 caught exception. completed_searches=2, process_started_ago=32.000, search_started_ago=0.034, search_ended_ago=0.000, total_usage_time=3.848
Hello everyone, trust you are all having a lovely day. I want to find out if there are any activities I can perform periodically to maintain my AppDynamics setup and prevent possible downtime. Thanks
Encountering random skipped searches / slow UI access.
How can I create a stacked bar graph showing the different log levels (Error, Info, Debug) generated by each Process?

index="intau_workfusion" sourcetype=workfusion.out.log host=*
| rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?<Process>[^\]]*)\]\s*\[(?<Step>[^\]]*)\]\s*\[(?<User>[^\]]*)\]\s*[^\[]+\s\[(?<Log_level>[^\]]+)"
| search Log_level="*"
| where Process != ""
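For the stacked rendering, one hedged way to shape the results is to finish the search with a chart of counts over Process split by Log_level, then pick a column chart with stacking enabled in the visualization format options; everything below reuses the extraction from the question:

index="intau_workfusion" sourcetype=workfusion.out.log host=*
| rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?<Process>[^\]]*)\]\s*\[(?<Step>[^\]]*)\]\s*\[(?<User>[^\]]*)\]\s*[^\[]+\s\[(?<Log_level>[^\]]+)"
| where Process != ""
| chart count over Process by Log_level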
I have a single-instance Splunk Enterprise (Windows) deployment. The indexes are on a drive that is now full, and I have added a new F: drive. I want to move my indexes to the new drive. Do I need to make any configuration change related to the new drive?
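For reference, a hedged sketch of what the per-index change could look like in indexes.conf, assuming a hypothetical F:\SplunkIndexes target directory (the paths are illustrative, not your actual ones): stop Splunk, move the existing bucket directories to the new paths, update the stanzas, then restart.

[main]
homePath   = F:\SplunkIndexes\main\db
coldPath   = F:\SplunkIndexes\main\colddb
thawedPath = F:\SplunkIndexes\main\thaweddb

If everything should live on F:, repointing the SPLUNK_DB variable wholesale is the other common route.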
Hello all, I have created a dashboard with Dashboard Studio and have a long list of visualizations for groups of servers (CPU usage, memory usage, disk I/O, etc.). If I need to get to a certain set of visualizations, I have to scroll past a long list of other visualizations. Is it possible to navigate to a certain section within the same dashboard?

Thank you,
Currently, we group related alerts in Alertmanager and then send them to Splunk On-Call to make incident management more friendly. However, when a single alert within an incident is resolved, the entire incident is marked as resolved. Is it possible not to mark the incident as resolved until all grouped alerts are resolved?
Hi, I want to prevent alerts from being skipped, and I'm fine with the alerts not running at a specific time; I prefer to be notified with a delay rather than not at all.

One option is to set a schedule window. First of all, I'm wondering why alert editing does not offer this option the way reports do; I have to go to the advanced edit mode to configure the schedule window. When it is configured, we allow the scheduler to delay the dispatch time, but at some point the search will be skipped anyway.

Another option is to use the scheduling mode "continuous". As far as I understand it, an alert with mode "continuous" is never skipped, which sounds reasonable for security monitoring without gaps. I assume the scheduler will try to run the search as soon as possible. Is the continuous mode a best practice to avoid gaps, or are there valid reasons not to use it? If this mode is used, it might be a good idea to observe the scheduler lag more closely to determine how late alerts run and whether the scheduler is building up a huge backlog of delayed searches.

I also don't know how the scheduling_mode interacts with the schedule window. Does the schedule window have any effect when the mode is "continuous"?
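For reference, a hedged sketch of how the two options look in savedsearches.conf as I understand them; the stanza name is a placeholder:

[My Security Alert]
# continuous scheduling: base the next run on the last run time, so periods are not skipped
realtime_schedule = 0
# allow the dispatch time to slip by up to 10 minutes when the scheduler is busy
schedule_window = 10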
I have an alert set up to detect multiple invalid user credential sign-in attempts, which runs once every 24 hours at 9am. However, once 9am rolls around, I get an excessive number of alert emails, one for each of the invalid sign-in attempts. I'd love it if there were just one email with all the alerts listed in it.
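A hedged pointer on the email fan-out: this usually comes down to the alert's trigger setting, "For each result" versus "Once" (digest mode). In savedsearches.conf terms, the relevant knobs look roughly like this, assuming the email action is already configured and the stanza name is a placeholder:

[Invalid Credential Sign-ins]
# fire one action for the whole result set instead of one per matching event
alert.digest_mode = 1
# put the result table inline in that single email
action.email.inline = 1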
Hello community. I'm trying to extract information from a string field and build a graph on a dashboard. In the graph, I want to group identical messages. I run into difficulties when grouping a type of message that contains an id, which is different for each message, so each message ends up as a separate value. Example message: {"status":"SUCCESS","id":"123456789"}. I use this query:

"source" originalField AND ("SUCCESS" OR "FAILURE")
| stats count by originalField

This query groups the fields that contain a FAILURE status, but does not group the SUCCESS ones because they have different IDs. I tried different substrings but it doesn't work. Can someone give me a solution?
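One hedged approach: normalize the variable id away before grouping, with a sed-style rex on the field; the pattern assumes the id is always a quoted run of digits, as in the sample message:

"source" originalField AND ("SUCCESS" OR "FAILURE")
| rex field=originalField mode=sed "s/\"id\":\"[0-9]+\"/\"id\":\"<id>\"/"
| stats count by originalField

Alternatively, extracting just the status and counting by that sidesteps the id entirely: | rex field=originalField "\"status\":\"(?<status>[^\"]+)\"" | stats count by status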
<6>2023-08-17T04:51:52Z 49786672a6c4 PICUS[1]: {"common":{"unique_id":"6963f063-a68d-482c-a22a-9e96ada33126","time":"2023-08-17T04:51:51.668553048Z","type":"","action":"","user_id":0,"user_email":"","user_first_name":"","user_last_name":"","account_id":7161,"ip":"","done_with_api":false,"platform_licences":null},"data":{"ActionID":26412,"ActionName":"Zebrocy Malware Downloader used by APT28 Threat Group .EXE File Download Variant-3","AgentName":"VICTIM-99","AssessmentName":"LAB02","CVE":"_","DestinationPort":"443","File":"682822.exe","Hash":"eb81c1be62f23ac7700c70d866e84f5bc354f88e6f7d84fd65374f84e252e76b","Result":{"alert_result":"","has_detection_result":false,"logging_result":"","prevention_result":"blocked"},"RunID":109802,"SimulationID":36236,"SourcePort":"51967","Time":5}}

I have a raw log like this. Can you help me parse it into separate fields?
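A hedged search-time sketch: strip the syslog header with rex and let spath expand the embedded JSON into fields; the capture assumes the JSON always starts right after the PICUS[...]: token, as in the sample, and the index/sourcetype are placeholders:

index=your_index sourcetype=your_sourcetype
| rex "PICUS\[\d+\]:\s+(?<json>\{.+\})"
| spath input=json

After the spath, nested keys arrive as dotted fields such as data.ActionName, data.Result.prevention_result, or common.account_id.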
Hey Fellow Splunkers,

I'm having a bit of trouble understanding how this works and whether I'm doing it correctly. Currently on version 9.0.2.

Scenario
Product Logs -> Syslog (has a HF on it) -> IDX

Syslog is writing to one single file that I monitor, and it contains multiple time formats and different line breaking. I basically want to bring all the syslog from this product into one sourcetype, kind of like a staging area, then split it out based on regex. This is what I've got so far. Most of this is dummy data, so don't worry about scrutinizing it for typos etc.

Configuration
This is all on the HF.

Inputs.conf
[monitor://path/to/product/syslogs]
index = syslog
sourcetype = product_staging

Props.conf
[product_staging]
TRANSFORMS = change_sourcetype_one, change_sourcetype_two

[sourcetype_one]
LINE_BREAKER = A line breaking example
TIME_FORMAT = %m-%a-%d %H:%M:%S
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30

[sourcetype_two]
LINE_BREAKER = A line breaking example
TIME_FORMAT = %C-%b-%a %M:%k:%S
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30

Transforms.conf
[change_sourcetype_one]
DEST_KEY = MetaData:Sourcetype
REGEX = (DataOne)
FORMAT = sourcetype::sourcetype_one

[change_sourcetype_two]
DEST_KEY = MetaData:Sourcetype
REGEX = (DataTwo)
FORMAT = sourcetype::sourcetype_two

I can get the data to split easily. My issue is that when it splits off into the different sourcetypes, the index-time features like TIME_FORMAT, TIME_PREFIX, LINE_BREAKER etc. don't take effect on the new sourcetypes that were created by the split.

Is it simply because the original sourcetype [product_staging] has already touched the data with its own settings, and now the other sourcetypes can't apply theirs? I honestly don't understand what I'm doing wrong. Any help would be greatly appreciated.
I had to change my UBA instance IP because of an infrastructure change. After the IP change was done, part of UBA couldn't be brought up again. I ran a health check and found it's jammed on the Docker socket. Has anyone run into this and knows how to fix it?

I saw some solutions like adding the user to the /var/run/docker.sock permission group, but I'm curious: the user "caspida" is already permitted to sudo ALL commands, so is that really the problem? In addition, all the configuration I can see is keyed by hostname, so I'm not sure why an IP change would cause a problem. I'm running a single-instance deployment, version 5.2.

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?