All Topics

I tried to install unprivileged Phantom SOAR on CentOS 7, but I receive the same error every time. Can somebody help, please? The error:

Initializing Splunk SOAR settings
Failed Splunk SOAR initialization
Traceback (most recent call last):
  File "/home/phantom/soar/splunk-soar/install/console.py", line 207, in run
    proc = subprocess.run(normalized_cmd, **cmd_args)  # noqa: PHANTOM112
  File "/home/phantom/soar/splunk-soar/usr/python39/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/phantom/soar/bin/phenv', 'python', '/home/phantom/soar/bin/initialize.py', '--first-initialize']' returned non-zero exit status 2.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/phantom/soar/splunk-soar/./soar-install", line 72, in main
    deployment.run()
  File "/home/phantom/soar/splunk-soar/install/deployments/deployment.py", line 132, in run
    self.run_deploy()
  File "/home/phantom/soar/splunk-soar/usr/python39/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/home/phantom/soar/splunk-soar/install/deployments/deployment.py", line 193, in run_deploy
    operation.run()
  File "/home/phantom/soar/splunk-soar/install/operations/deployment_operation.py", line 135, in run
    self.install()
  File "/home/phantom/soar/splunk-soar/install/operations/tasks/initialize_phantom.py", line 62, in install
    self.initialize_py("--first-initialize")
  File "/home/phantom/soar/splunk-soar/install/operations/tasks/initialize_phantom.py", line 33, in initialize_py
    return self.shell.phenv(cmd, **kwargs)
  File "/home/phantom/soar/splunk-soar/install/console.py", line 275, in phenv
    return self.run([phenv] + cmd, **kwargs)
  File "/home/phantom/soar/splunk-soar/install/console.py", line 224, in run
    raise InstallError(
install.install_common.InstallError: Failed Splunk SOAR initialization
install failed.
Hello, we have an account on Splunkbase through which we have published our apps. We have tried to reset the password on the account but do not receive an email. We do not know how to progress this; our partnership account support just closed our request. Thanks, David
Hello everyone! I have Splunk events in the following format:

   activity_time: 2023-06-29T12:45:06Z
   event_time: 2023-06-29T14:49:42.787Z
   shipment_status: delivered
   timestamp: 2023-06-29T14:49:51.069Z
   tracking_number: 95AAEC4900000

And of course, every event has a time value on top of the values already provided. shipment_status can contain values like 'delivered' and 'in_process' and some others. I need to find the percentage of events with the same 'tracking_number' for which the event with shipment_status: in_process came first, before the event with shipment_status: delivered. That means I need to group events by their tracking_number, keep only those with shipment_status in_process or delivered, compare the times of both events, and count them towards the overall percentage of the filtered events if the in_process event came first, for example, 1 hour before the delivered event. I'm very confused by the operators that Splunk uses for the filtering and calculating logic; could someone please help me with the composition of the query?
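A minimal sketch of one possible approach, assuming a placeholder index name and that "came first" means at least one hour before the delivered event (adjust the 3600-second gap as needed):

index=your_shipments shipment_status IN ("in_process", "delivered")
| stats min(eval(if(shipment_status=="in_process", _time, null()))) as first_in_process
        min(eval(if(shipment_status=="delivered", _time, null()))) as first_delivered
        by tracking_number
| where isnotnull(first_delivered)
| eval in_process_first=if(isnotnull(first_in_process) AND first_in_process <= first_delivered - 3600, 1, 0)
| stats count as delivered_shipments sum(in_process_first) as in_process_first
| eval percentage=round(in_process_first / delivered_shipments * 100, 2)

The first stats collapses each tracking_number to the earliest time of each status, and the second stats turns the per-shipment flag into an overall percentage.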
Hi all, I'm trying to audit correlation searches in my environment but am unable to view the "Last Modified By" and "Last Modified Time" using the search below; it is showing me empty fields. What changes do we need?
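A possible starting point, assuming a REST-based approach (the correlation-search filter may differ by Enterprise Security version); note that this endpoint exposes a last-updated timestamp but not the user who made the change, which usually has to come from audit or configuration-change logs instead:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title eai:acl.app eai:acl.owner updated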
Hi, I want to put a button on a dashboard so that when I click it, it runs a bash script on the Splunk server and shows the message "script executed successfully" when it's done. Any idea? Thanks
Hi All, We have created a few macros with the definition below and added the macro names to the important critical alerts. ```maintenance_window=true``` I want to alert whenever changes are made to the macro; in particular, I want to alert the team when the above definition is uncommented (uncommenting it stops many of the important alerts during maintenance) and someone forgets to comment it back. How can I create an alert that watches the macro? Thanks in advance, Amrutha SK
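One possible sketch, assuming a scheduled search against the REST configuration endpoint, a placeholder macro name, and that the uncommented definition is exactly maintenance_window=true; it returns a row (and can therefore trigger an alert on "number of results > 0") whenever the macro is left in its uncommented state:

| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title="maintenance_window_macro"
| where definition=="maintenance_window=true"
| table title definition updated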
Hello Splunkers, Do you know if I can forward cooked data from my HF1 to my HF2? (I have tried from one HF to a standalone Splunk instance, but never from HF to HF.) I am wondering if there is something to set on the HF2 to tell it "do not try to parse any incoming data for this input". Thanks a lot, GaetanVP
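For reference, a minimal sketch of the plumbing involved, assuming the default receiving port 9997 and a placeholder hostname; data that arrives already cooked/parsed over splunktcp is normally passed through by the second heavy forwarder rather than re-parsed, so no special "do not parse" flag should be needed on the input:

# outputs.conf on HF1 -- splunktcp output sends cooked data by default
[tcpout]
defaultGroup = hf2_group

[tcpout:hf2_group]
server = hf2.example.com:9997

# inputs.conf on HF2 -- standard splunktcp receiving port
[splunktcp://9997]
disabled = 0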
ID   curr_row   comparison_result
19   Turn on    equal
19   1245       equal
19   1245       equal
19   1245       equal
19   1245       equal
19   1245       equal
19   1245       equal
20   Turn on    not equal
20   7656       equal
20   7690       not equal
20   8783       equal

For the above table, whenever the comparison_result column value is "not equal", I want to copy the corresponding whole row and insert it before that row, changing only the curr_row value to "Turn on", without using the mvexpand command. I tried the mvexpand query below, but I ran into a memory issue.

Mvexpand query:

| eval row=if(comparison_result=="not equal" AND curr_row!="Turn on", mvrange(0,2), null())
| mvexpand row
| eval curr_row=if(row==0, "Turn on", curr_row)
| fields - row
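One possible mvexpand-free sketch using appendpipe: a running row number is added as a sort key, the matching rows are duplicated with curr_row forced to "Turn on" and a slightly smaller sort key, and the results are re-sorted so each copy lands just before its original (the row_order helper field is an assumption added for illustration):

... base search ...
| streamstats count as row_order
| appendpipe
    [ where comparison_result="not equal" AND curr_row!="Turn on"
      | eval curr_row="Turn on", row_order=row_order-0.5 ]
| sort 0 row_order
| fields - row_order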
I'm trying to do this:

aid=0 Overflowexception msg="Print completed" @t<first | [search Overflowexception | stats min(@t) as first by username]

which does not work. I want to find the first (min) time a user experiences an overflow, and find out whether there has been a completed print before this first overflow. What am I doing wrong? I can do it manually like this:

aid=0 Overflowexception   --to find users that have gotten the error
aid=0 Overflowexception username=Staale | sort @t   --to find the first error for the user
aid=0 username=Staale msg="Print completed" @t<2023-06-29T06:32:53.900387z   --time received from the above

But this is very time-consuming and I want to do it all in one search. I usually use SQL, so I guess my approach is too SQL-like. Any ideas are appreciated. -=Staale=
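A sketch of one possible single-search approach, assuming _time can stand in for @t (otherwise substitute the @t field) and that username is present on both event types:

aid=0 (Overflowexception OR msg="Print completed")
| eval event_class=if(searchmatch("Overflowexception"), "overflow", "print")
| stats min(eval(if(event_class=="overflow", _time, null()))) as first_overflow
        min(eval(if(event_class=="print", _time, null()))) as first_print
        by username
| where isnotnull(first_overflow) AND isnotnull(first_print) AND first_print < first_overflow

The stats pass computes, per user, the earliest overflow and the earliest completed print; the final where keeps only users whose first completed print happened before their first overflow.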
Hello, hope you are well. I want to extract only TP58304 from this line: (8)TP58304 (5)endra(3)ttx(5)local(0) How can I do that, please? Thanks
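A possible rex sketch, assuming the value you want always follows the "(8)" prefix and runs up to the next space or parenthesis (the field name extracted_name is just a placeholder):

| rex field=_raw "\(8\)(?<extracted_name>[^( ]+)"
| table extracted_name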
Hello Splunkers, I have a question: would it be possible to assign a specific sourcetype to some logs inside an inputs stanza, depending on the content of the log itself (based on the keys/fields extracted or some regex...)? For instance:

[monitor:///whatever]
if foo = bar sourcetype = scr_type_1
else sourcetype = scr_type_2

I have little hope for this one... Thanks a lot, GaetanVP
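Conditional logic isn't available in inputs.conf, but a per-event sourcetype rewrite at parse time is the usual equivalent; a minimal sketch, assuming the monitor stanza assigns a base sourcetype of scr_type_1 and the regex is adapted to your events:

# props.conf -- attach the transform to the sourcetype set by the input
[scr_type_1]
TRANSFORMS-set_sourcetype = force_scr_type_2

# transforms.conf -- rewrite the sourcetype when the raw event matches "foo = bar"
[force_scr_type_2]
REGEX = foo\s*=\s*bar
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::scr_type_2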
Hi Splunkers! Good day! I need a search which extracts the count of serial_number over two different time ranges; I should calculate the difference, and if it's greater than 5000, it should trigger an alert.

index="inventory" origin="Inventory:ITSM" earliest=-6h latest=now()
| fields serial_number
| stats count(serial_number) as total_assets
| search [ search index="inventory" origin="Inventory:ITSM" earliest=-12h latest=-6h
    | fields serial_number
    | stats count(serial_number) as total_assets_prev]
| eval diff=total_assets_prev-total_assets
| where diff>5000
| eval message="Hello Team, the assets in ITSM origin is less than 65000 and its actual value is "
| table message, total_assets_prev, total_assets

This query is not working. Thanks in advance! Manoj Kumar S
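One possible rework, assuming the subsearch is meant to contribute a total_assets_prev column rather than act as a filter (which is what | search [ ... ] does); appendcols attaches the previous-window count onto the same row so the difference can be evaluated:

index="inventory" origin="Inventory:ITSM" earliest=-6h latest=now()
| stats count(serial_number) as total_assets
| appendcols
    [ search index="inventory" origin="Inventory:ITSM" earliest=-12h latest=-6h
      | stats count(serial_number) as total_assets_prev ]
| eval diff=total_assets_prev-total_assets
| where diff>5000
| eval message="Hello Team, the asset count in the ITSM origin dropped by ".diff." compared to the previous 6-hour window."
| table message, total_assets_prev, total_assets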
I've got a multisearch query basically using inputlookups to trace a sprawling Kafka setup, getting all the various latencies from source to destination and grouping the results per application, e.g. AppA avg latency is 1.09 sec, AppX avg latency is 0.9 sec. For example:

Application   _time            total_avg   total_max
AppA          28/6/2023 0:00   0.05        0.09
AppA          28/6/2023 1:00   0.05        0.1
AppA          28/6/2023 2:00   0.05        0.08
AppB          28/6/2023 0:00   0.05        0.09
AppB          28/6/2023 1:00   0.22        2.72
AppB          28/6/2023 2:00   0.05        0.09
AppC          28/6/2023 0:00   0.06        0.1
AppC          28/6/2023 1:00   0.05        0.09
AppC          28/6/2023 2:00   0.05        0.09
AppX          28/6/2023 0:00   0.05        0.09
AppX          28/6/2023 1:00   0.04        0.09
AppX          28/6/2023 2:00   0.04        0.09

There are many other numeric result columns, but for the sake of simplicity and the end goal of evaluating the SLA %, they're irrelevant. I'm trying to extend the query generating this output and make a dashboard to track the SLA across all applications. Simply: was the app's latency below the app-specific SLA expectation/threshold and OK, or was it over and in breach, per span (hourly)... and of course what's the resulting SLA % per app per day/week/month. I'm using the following query appended below the above query output:

| makecontinuous _time span=60m
| filldown Application
| fillnull value="-1"
| lookup SLA.csv Application AS Application OUTPUT SLA_threshold
| eval spans = if(isnull(spans),1,spans)
| fields _time Application spans SLA_threshold total_avg total_max
| eval SLA_status = if(total_avg > SLA_threshold, "BREACH", "OK")
| eval SLA_nodata = if(total_avg < 0, "NODATA", "OK")
| eval BREACH = case(SLA_status == "BREACH", 1)
| eval OK = case(SLA_status == "OK", 1)
| eval NODATA = case(SLA_nodata == "NODATA", 1)
| stats sum(spans) as TotalSpans, count(OK) as OK, count(BREACH) as BREACH, count(NODATA) as NODATA, by Application
| eval SLA=OK/(TotalSpans)*100

I have this mostly working okay, and it returns results for a dashboard like:

Application   TotalSpans   SLA_threshold   OK   BREACH   NODATA   SLA %
AppA          24           1.5             24   0        1        100
…
AppX          24           1               23   0        1        100

But unfortunately, there's a central problem I need to take into account: sometimes the apps don't have any data for their latency calculations, which end up null, and this throws off the SLA results because it leaves missing buckets/spans. For the sake of space, let's say over a 3-hour period AppA is normal with 3x 1h span buckets of latency data output -- the SLA % eval will work fine. But AppX has missing results for bucket 01h, looking like this:

Application   _time            total_avg   total_max
AppA          28/6/2023 0:00   0.17        2.72
AppA          28/6/2023 1:00   0.04        0.09
AppA          28/6/2023 2:00   0.05        0.1
AppX          28/6/2023 0:00   0.04        1.09
AppX          28/6/2023 2:00   0.04        1.09

The SLA % eval will be off for AppX with one less span. Ideally, I need to fill in those empty buckets with something, not only to correctly count spans per app (so the SLA % calculation isn't affected) but also to flag the missing data somehow, so I can distinguish between an SLA breach for data above threshold and a breach for no data, or at least choose how I treat it. My current approach, as above, has been to use makecontinuous _time span=60m and fillnull value="-1". The -1 results can hit an eval for "NODATA" and be taken into account separately from a "BREACH", e.g.:
Application   _time            total_avg   total_max
AppX          28/6/2023 0:00   0.04        1.09
AppX          28/6/2023 1:00   -1          -1
AppX          28/6/2023 2:00   0.04        1.09

Now, the eval case logic for the different SLA and data conditions is not optimal or even right (a way to eval things as NODATA and class them as OK would be good). Either way, as mentioned, this approach works okay when a single specific app is being queried, but once I search Application="*", the approach with makecontinuous _time span=60m and the eval spans and other case logic no longer works as desired, because the _time buckets already exist for the other applications that have all their results data, so makecontinuous doesn't add any missing buckets or fill in "-1" for the apps that don't. I've also tried timechart, which will fill things in for all apps, but then I'm faced with another problem: because Application is a non-numeric field, it adds a gazillion columns, e.g. "total_avg: AppA" ... "total_avg: AppX" etc., and there are a dozen other numeric result columns. I'd prefer the output to stay application-specific. Any suggestions for a tweak or an alternate way to make makecontinuous _time work on a per-application basis, or a way to simplify or pivot off of the timechart output?
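One possible way to get per-application gap filling back into a row-per-Application shape is to let timechart create the buckets and then untable the result; a sketch, assuming total_avg is the metric that drives the SLA check (other metrics would need their own pass or a foreach) and with <base multisearch query> standing in for the existing search:

<base multisearch query>
| timechart span=60m limit=0 useother=false avg(total_avg) as total_avg by Application
| fillnull value="-1"
| untable _time Application total_avg
| lookup SLA.csv Application OUTPUT SLA_threshold
| eval SLA_status=case(total_avg < 0, "NODATA", total_avg > SLA_threshold, "BREACH", true(), "OK")
| stats count as TotalSpans count(eval(SLA_status="OK")) as OK count(eval(SLA_status="BREACH")) as BREACH count(eval(SLA_status="NODATA")) as NODATA by Application
| eval SLA=round((OK+NODATA)/TotalSpans*100, 2)

Here the fillnull runs before untable so empty buckets survive as -1 rows, every hour gets a row per application, and NODATA spans are counted separately (in this sketch they are treated as OK in the final SLA figure, which is easy to change).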
I have the following search:

index=xoom_app_online_checkout_orchestration_api (level=ERROR AND "Failed to get open-banking realtime balance" AND issue=* ) OR event_type=OPEN_BANKING_REALTIME_BALANCE_SUCCESS
| eval Issue=if(event_type=="OPEN_BANKING_REALTIME_BALANCE_SUCCESS", "OPEN_BANKING_REALTIME_BALANCE_SUCCESS", issue)
| stats count as Count by Issue
| eventstats sum(Count) as Total
| eval Percentage=round((Count/Total)*100,2)
| fields - Total
| sort 0 - Count
| addcoltotals

I get this result:

Issue                                    Count   Percentage
OPEN_BANKING_REALTIME_BALANCE_SUCCESS    181     76.05
VALIDATION_ERROR                         42      17.65
INVALID_LOGIN_CREDENTIALS                14      5.88
PERMISSION_DENIED                        1       0.42
                                         238     100.00

I want to trigger an alert if the "Percentage" value of the row with Issue=OPEN_BANKING_REALTIME_BALANCE_SUCCESS is < 75. I could not figure out how to add a hidden field or similar to use as the WHERE clause for the alert.
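One possible sketch of that hidden field, assuming the alert should simply fire when any rows remain: eventstats copies the success row's Percentage onto every row, a where clause applies the threshold, and the helper field is dropped again so the table is unchanged when the alert does fire:

... existing search through | addcoltotals ...
| eventstats max(eval(if(Issue="OPEN_BANKING_REALTIME_BALANCE_SUCCESS", Percentage, null()))) as success_percentage
| where success_percentage < 75
| fields - success_percentage

With this at the end, the alert can use the built-in trigger condition "number of results is greater than 0".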
Hello, I'm trying to set up a cluster and I know that I need to have an indexer set up; however, I have no idea how to set up a Splunk indexer. Please help me.
Before 23.6.0, the agent was respecting the parameter:

instrumentationConfig:
    numberOfTaskWorkers: 2

After 23.6.0, this config is not being respected and much more instrumentation happens, leading to high CPU usage. I have tried all the parameters, but it looks like a bug to me; it was working before. I'm using this config. All my Java services have readiness and liveness probes.

clusterAgent:
    containerProperties:
        containerBatchSize: 1
        containerParallelRequestLimit: 1
        podBatchSize: 1
instrumentationConfig:
    numberOfTaskWorkers: 1
    podBatchSize: 2

Please help.
Production had a bug. One of the results of that bug was massive "over-logging" on production nodes, and those logs were forwarded (universal forwarder) to our Splunk server. Development reverted production, but Splunk was "log-jammed", as indicated by the queues, for several hours. We know that we can clear the backlog on the clients (Splunk universal forwarders) by turning off the forwarder and cleaning out the application logs as well as the following forwarder files:

\<application logs>
\var\log\splunk\metric*
\var\lib\splunk\fishbucket*
<restart forwarder>

It seems that several of the forwarders successfully forwarded data, and this jammed up our queues. I realize that Splunk is designed NOT to lose data, but assuming we were willing to accept some "pending" data loss, is there any way to clear the server-side queues or "dump data" from indexing to clear a backlog? We considered "blacklisting" specific files from indexing as was done in this post, but, as indicated by the post, undoing the blacklist results in those files going back for index processing.
Hey all, when I run a search like this:

index=crowdstrike_pci sourcetype=crowdstrike:events:sensor event_simpleName=FileIntegrityMonitorRuleMatched
| rename CommandLine AS process ContextTimeStamp AS file_access_time ImageFileName AS file_path ObjectName AS file_name ParentBaseFileName AS parent_process_exec ParentBaseFileName AS parent_process_name ParentCommandLine AS parent_process ParentImageFileName AS parent_process_path ParentProcessId AS parent_process_id RawProcessId AS process_id SHA256HashData AS file_hash UserName AS user aip AS dest event_platform AS os

the fields populate correctly, but when I go to the Field Alias settings in the GUI to make them permanent, they don't appear in the search. Permissions are set for everyone to read all, and for sc_admin to write them. It's Splunk Cloud, so I don't have access to props.conf unless I upload one myself, but the field alias works for other sourcetypes, just not this one. Any ideas?
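For reference, a sketch of what the equivalent props.conf would look like if uploaded in an app (the stanza name is assumed to match the sourcetype; note that a field alias keeps the original field and adds the new name, whereas rename replaces it):

# props.conf -- search-time field aliases for the CrowdStrike sensor events
[crowdstrike:events:sensor]
FIELDALIAS-process          = CommandLine AS process
FIELDALIAS-file_access_time = ContextTimeStamp AS file_access_time
FIELDALIAS-file_path        = ImageFileName AS file_path
FIELDALIAS-file_name        = ObjectName AS file_name
FIELDALIAS-parent_process   = ParentBaseFileName AS parent_process_exec ParentBaseFileName AS parent_process_name ParentCommandLine AS parent_process
FIELDALIAS-parent_path_id   = ParentImageFileName AS parent_process_path ParentProcessId AS parent_process_id
FIELDALIAS-process_id       = RawProcessId AS process_id
FIELDALIAS-file_hash        = SHA256HashData AS file_hash
FIELDALIAS-user_dest_os     = UserName AS user aip AS dest event_platform AS os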
We keep getting warnings such as this one. We have gone into the savedsearches conf files and renamed them on a different SH, but I don't think that's the answer. So my question is: what is the answer?
So I have this query that creates an incident if there are 7 outliers in the last 15 minutes:

| streamstats time_window=15m current=true reset_after=(propOutlier=7) sum(isOutlier) as propOutlier by InterfaceName
| eval isIncident = if(propOutlier=7, 1, 0 )
| eventstats max(isIncident) as hasIncident by InterfaceName
| where hasIncident=1

Now I would like to add to "isIncident" the situation where there have been 5 outliers, repeatedly, in each of the last 3 time windows of 15 minutes. If there have been 5 outliers in a single 15-minute window, I do not care; but if this happens 3 times in a row, it is a problem for me. Can anyone help? Thank you
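A possible sketch of the consecutive-window condition, assuming it can be computed from fixed 15-minute buckets rather than the sliding streamstats window, and that this runs over the same events as the existing query (field names other than isOutlier and InterfaceName are placeholders):

| bin _time span=15m
| stats sum(isOutlier) as outliers_in_window by InterfaceName _time
| sort 0 InterfaceName _time
| eval over_5=if(outliers_in_window>=5, 1, 0)
| streamstats window=3 sum(over_5) as consecutive_over_5 by InterfaceName
| eval isIncident=if(consecutive_over_5=3, 1, 0)
| where isIncident=1

Here each 15-minute bucket gets an over-5 flag, and streamstats with window=3 checks whether the three most recent buckets for an interface all carried the flag.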