I'm trying to find a simple way to calculate the product of a single column, e.g.:

value_a
0.44
0.25
0.67

Ideally, I could use something like this:

| stats product(value_a)

But this doesn't seem possible.
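A common workaround, since stats has no product() function, is to sum natural logarithms and exponentiate the result. A minimal sketch, assuming value_a is strictly positive:

| stats sum(eval(ln(value_a))) as ln_sum
| eval product = exp(ln_sum)

For the sample column this gives 0.44 * 0.25 * 0.67 = 0.0737. Zeros or negative values would need to be handled separately before taking logs.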
Hi! I was wondering if anybody had any CSS/XML code that could be used to hide the "Populating..." text under this drilldown in my dashboard?
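One pattern that may help, sketched below for a Simple XML dashboard: a row hidden behind a never-set token carries a <style> block that suppresses the element. The CSS selector here is an assumption; inspect the "Populating..." element with your browser's dev tools and substitute the class it actually uses.

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* placeholder selector; replace with the real class from dev tools */
        .input .waiting-text { display: none !important; }
      </style>
    </html>
  </panel>
</row>

Because the $alwaysHideCSS$ token is never set, the row itself stays invisible while its CSS still applies to the page.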
I have a DBConnect query that runs to populate the panel of a dashboard every week. We upgraded both the database which houses the data AND Splunk a couple of weeks ago. The new database is Postgres 14 and Splunk is now at 9.2.3. I have run this query directly on the Postgres box, so it appears that Postgres doesn't suddenly have an issue with it. Other panels/queries in this dashboard use the same DBConnect connection, so the path, structure, and data all appear to be good. The issue seems to lie with the "math" in the time range, but I cannot put my finger on why. Basically, we are trying to pull a trend of data going back 8 weeks, starting from last week.

query="SELECT datekey, policydisposition, count(guid) as events
FROM event_cdr
WHERE datekey >= CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int) - (7*8)
AND datekey < CURRENT_DATE - CAST(EXTRACT(DOW FROM CURRENT_DATE) as int)
AND direction_flag = 1
AND policydisposition = 1
GROUP BY datekey, policydisposition
ORDER BY datekey, policydisposition"

When I try to execute this query, I consistently get the following error: "Error in 'dbxquery' command: External search command exited unexpectedly. The search job has failed due to an error. You may be able to view the job in the Job Inspector"

Some of the search.log file is here:

12-30-2024 16:00:27.142 INFO PreviewExecutor [3835565 StatusEnforcerThread] - Preview Enforcing initialization done
12-30-2024 16:00:28.144 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW
12-30-2024 16:02:27.196 ERROR ChunkedExternProcessor [3835572 phase_1] - EOF while attempting to read transport header read_size=0
12-30-2024 16:02:27.197 ERROR ChunkedExternProcessor [3835572 phase_1] - Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 WARN ReducePhaseExecutor [3835572 phase_1] - Not downloading remote search.log and telemetry files. Reason: No remote_event_providers.csv file.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835572 phase_1] - Ending phase_1
12-30-2024 16:02:27.197 INFO UserManager [3835572 phase_1] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.197 ERROR SearchOrchestrator [3835544 searchOrchestrator] - Phase_1 failed due to : Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO ReducePhaseExecutor [3835565 StatusEnforcerThread] - ReducePhaseExecutor=1 action=QUIT
12-30-2024 16:02:27.197 INFO DispatchExecutor [3835565 StatusEnforcerThread] - Search applied action=QUIT while status=GROUND
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - sid=1735574426.75524, newState=FAILED, message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 ERROR SearchStatusEnforcer [3835565 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1735574426.75524 message_key= message=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.197 INFO SearchStatusEnforcer [3835565 StatusEnforcerThread] - State changed to FAILED: Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.201 INFO UserManager [3835565 StatusEnforcerThread] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO DispatchManager [3835544 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1735574426.75524', username='User338')
12-30-2024 16:02:27.202 INFO UserManager [3835544 searchOrchestrator] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.202 INFO SearchOrchestrator [3835541 RunDispatch] - SearchOrchestrator is destructed. sid=1735574426.75524, eval_only=0
12-30-2024 16:02:27.203 INFO SearchStatusEnforcer [3835541 RunDispatch] - SearchStatusEnforcer is already terminated
12-30-2024 16:02:27.203 INFO UserManager [3835541 RunDispatch] - Unwound user context: User338 -> NULL
12-30-2024 16:02:27.203 INFO LookupDataProvider [3835541 RunDispatch] - Clearing out lookup shared provider map
12-30-2024 16:02:27.206 ERROR dispatchRunner [600422 MainThread] - RunDispatch has failed: sid=1735574426.75524, exit=-1, error=Error in 'dbxquery' command: External search command exited unexpectedly.
12-30-2024 16:02:27.213 INFO UserManagerPro [600422 MainThread] - Load authentication: forcing roles="db_connect_admin, db_connect_user, slc_user, user"
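Two hedged observations from the log: the external command dies almost exactly two minutes after dispatch (16:00:27 to 16:02:27), which looks like a timeout rather than a syntax error, and nothing Postgres-specific is reported back. As a quick way to separate "the query hangs under DBConnect" from "the date math is wrong", try an equivalent form that folds the two subtractions together and caps the output (functionally the same window, since a - b - 56 = a - (b + 56)):

SELECT datekey, policydisposition, count(guid) AS events
FROM event_cdr
WHERE datekey >= CURRENT_DATE - (EXTRACT(DOW FROM CURRENT_DATE)::int + 56)
  AND datekey < CURRENT_DATE - EXTRACT(DOW FROM CURRENT_DATE)::int
  AND direction_flag = 1
  AND policydisposition = 1
GROUP BY datekey, policydisposition
ORDER BY datekey, policydisposition
LIMIT 100

If the LIMITed form also stalls for two minutes and dies, look at DBConnect's query timeout and the JDBC driver version against Postgres 14 rather than at the SQL itself.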
Splunkers, I'm trying to detect when a user fails more than 5 times within a one-hour range over the last 24h. I have the SPL below, but I would like an opinion from the community on whether there is another way to express the same logic in SPL.

SPL used:

index=VPN_Something
| bin _time span=24h
| stats list(status) as Attempts, count(eval(match(status,"failure"))) as Failed, count(eval(match(status,"success"))) as Success by _time user
| eval "Time Range" = strftime(_time,"%Y-%m-%d %H:%M")
| eval "Time Range" = 'Time Range'.strftime(_time+3600,"- %H:%M")
| where Failed > 5
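One hedged alternative: if the requirement is "more than 5 failures within any one-hour window of the last 24", binning by 24h hides that, so bin by hour instead. A sketch, assuming the status values match "failure" as in the original:

index=VPN_Something status=failure earliest=-24h
| bin _time span=1h
| stats count as Failed by _time user
| where Failed > 5
| eval "Time Range" = strftime(_time, "%Y-%m-%d %H:%M") . strftime(_time + 3600, " - %H:%M")
| table "Time Range", user, Failed

Filtering to failures in the base search also shrinks the event set before stats runs. If the windows must slide rather than align to clock hours, streamstats with time_window=1h is the usual follow-up.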
Is there any way that we can export the logs from Zabbix to Splunk, either via a script or by setting up an HEC collector data input? I'm trying to display all the logs from my Zabbix server in Splunk. Please help, I am stuck on this.
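Both routes work; the script route usually ends at Splunk's HTTP Event Collector. A minimal hedged sketch of posting one Zabbix event to HEC with curl (host, token, index, and sourcetype are placeholders for your environment):

curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"index": "zabbix", "sourcetype": "zabbix:event", "event": {"host": "zbx01", "message": "example alert text"}}'

A script like this can be wired up as a Zabbix media type / alert script so every alert is pushed as it fires; alternatively, installing a Universal Forwarder on the Zabbix server and monitoring its log files avoids scripting entirely.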
Hello everyone, I encountered an issue, as shown in the image. I can see two machines in the MC's forwarder dashboard, but I don't see any machines in my forwarder management. I have added the following configuration to the DS, but it still doesn't work after restarting:

[indexAndForward]
index = true
selectiveIndexing = true

The deployment server and UF are both version 9.3. What aspects should I check?
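Worth noting, hedged: the [indexAndForward] stanza controls indexing behavior, not deployment-client registration, so it would not make forwarders appear in forwarder management. That page only lists clients that phone home to the deployment server, which is configured on each UF. A sketch of the client side (host and port are placeholders):

# deploymentclient.conf on each UF ($SPLUNK_HOME/etc/system/local)
[deployment-client]

[target-broker:deploymentServer]
targetUri = <deployment-server-host>:8089

After restarting the UF, its splunkd.log should show phone-home attempts; if they fail, check that TCP 8089 to the DS is reachable and that at least one server class with an app exists on the DS.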
I want to add an endpoint to the webhook allow list. I checked the documentation for that. However, I cannot find "Webhook allow list" under Settings > Server settings. I'm using a trial instance with version 9.2.2406.109. 
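Hedged note: a build string like 9.2.2406.109 is Splunk Cloud Platform versioning, and Cloud trials do not expose the Server settings pages the way Splunk Enterprise does; there the webhook allow list is generally managed for you (via support or the Admin Config Service, depending on entitlement). As a hedged way to inspect what your instance currently has configured for the webhook alert action, the conf REST endpoint can be queried from search, assuming your role can read it:

| rest splunk_server=local /services/configs/conf-alert_actions
| search title=webhook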
" service error rate 50x 8.976851851851853" field = " service error rate 50x 8.976851851851853" need to extract 8.9 value from above string.
I have created a table with a series of outer joins. I now have a column 'Category' and another 6, call them A to F. In Category, I have values N1 and N2, and I would like to create a new row with Category=N3 and values for A to F equal to the sum of those for N1 and N2. I've tried every possible thing I could find but couldn't get it to work; any help is appreciated, thanks!

Current code looks something like this:

... queries & joins ...
| table "Category" "A" ... "F"
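One hedged way to add such a row without disturbing the existing ones is appendpipe, which runs a sub-pipeline over the current result set and appends its output. A sketch (write out all six sum() clauses for A through F):

| appendpipe
    [ search Category IN ("N1", "N2")
    | stats sum(A) as A, sum(B) as B, sum(C) as C, sum(D) as D, sum(E) as E, sum(F) as F
    | eval Category = "N3" ]

Placed after the final table command, this leaves the N1 and N2 rows intact and appends one N3 row holding the column sums.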
Hello All, I have set up a syslog server to collect all the network device logs; from the syslog server, a UF forwards these logs to the Splunk platform. The network component logs arriving in Splunk from the syslog server are delayed 14+ hours behind the actual logs; however, system audit logs from the same host arrive in near-real time. I have 50+ network components to collect syslog from for security monitoring.

My current architecture: all network syslog ---> syslog server (UF installed) ---> UF forwards logs to Splunk Cloud.

Kindly suggest an alternative to get near-real-time delivery of the network logs.
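A hedged first suspect for this exact pattern (one high-volume input lagging while small inputs on the same host stay current) is the UF's default forwarding throttle of 256 KBps, which syslog from 50+ devices can easily saturate. A sketch of the override on the syslog server's UF (0 removes the cap; pick a deliberate value for production):

# limits.conf on the UF ($SPLUNK_HOME/etc/system/local)
[thruput]
maxKBps = 0

The UF's metrics.log (blocked queue messages and the thruput lines) should confirm the bottleneck before and after. If throughput isn't the issue, syslog-ng/rsyslog writing very large single files can also serialize the monitor input; splitting output files per device helps there.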
Hello Splunk SOAR family, hope each of you is doing well. Does anyone have some tips when it comes to installing and configuring the new version of Splunk SOAR?
I've been attempting to see if it's possible to search for a term while ignoring all minor breakers that may or may not be in it. For example, in my case, I'm trying to search for the MAC address 12:EA:5F:72:11:AB, but I'd also like to find all instances of 12-EA-5F-72-11-AB or 12EA.5F72.11AB or even just 12EA5F7211AB, without needing to deliberately specify each of these variations. I thought I could do it using TERM(), but so far I haven't had any luck, and after reading the docs, I can see I may have misunderstood that command. Is there any way to do this simply?
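Hedged answer: probably not with a single TERM(), since TERM() matches a token exactly as it sits in the index, and the four spellings tokenize differently. A common compromise is to OR the known spellings in the base search (cheap, index-time filtering) and then normalize for an exact comparison; the field name mac is a placeholder:

index=netlogs ("12:EA:5F:72:11:AB" OR "12-EA-5F-72-11-AB" OR "12EA.5F72.11AB" OR "12EA5F7211AB")
| eval mac_norm = upper(replace(mac, "[^0-9A-Fa-f]", ""))
| where mac_norm = "12EA5F7211AB"

The replace() strips every non-hex character, so all four variants collapse to the same 12-character string.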
Hello, I have installed Splunk on AlmaLinux following a course and am facing this error. Thanks
Hello, I just installed the Tenable WAS Add-On for Splunk in my test instance. When combing through the data, I noticed we were not able to see the State and VPR fields. Both of these fields are needed to stay consistent with our current Vulnerability Management Program. The State field is an absolute must to report on active and fixed vulnerabilities. Thanks in advance for any assistance provided.
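As a hedged first check, it's worth confirming whether the fields are missing from the raw events or merely not extracted; the index and sourcetype below are placeholders for whatever the add-on writes in your environment:

index=<tenable_index> sourcetype=<tenable_was_sourcetype>
| fieldsummary
| search field=state OR field=vpr*

If fieldsummary lists them under different names, a field alias or rename restores consistency with your reporting; if they are absent from the events entirely, the gap is in the add-on's API export rather than in Splunk-side extraction.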
I'm a bit stumped on this problem. Before I jump into the issue, there are a couple of restrictions:

I'm working in an environment that is running an old version of Splunk which does not have access to the mvmap() function. Working on getting that updated, but until then I still need to try to get a solution to this problem figured out.

This operation is not the only piece of logic I'm trying to accomplish here. Assume there are other unnamed fields which are already sorted in a specific way which we do not want to disturb.

I'm attempting to filter out all elements of a list which match the first element, leaving only the elements which are not a match. Here is an example which does work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search = mvindex(base,1,mvcount(base)-1)
| eval search_value = mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv = mvfilter(!match(base, "A75CD"))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

However, when I attempt to switch it out for something like the following, it does not work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search = mvindex(base,1,mvcount(base)-1)
| eval search_value = mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv = mvfilter(!match(base, mvindex(base,0)))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I have even attempted to solve it using a foreach command, but was also unsuccessful:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search = mvindex(base,1,mvcount(base)-1)
| eval search_value = mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| foreach mode=multivalue base [eval filtered_mv = if('<<ITEM>>'!=mvindex(base,0), mvappend(filtered_mv,'<<ITEM>>'), filtered_mv)]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I'm open to any other ideas which might accomplish this better or more efficiently. Not sure where I'm going wrong with this one, or whether this idea is even possible.
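Two hedged notes. First, foreach mode=multivalue was only introduced in Splunk 9.0, which would explain the third attempt failing outright on an older release. Second, a pre-mvmap workaround is to round-trip the multivalue field through a delimited string, so mvfilter never needs a second field at all. A sketch, assuming the values contain no commas or regex metacharacters:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval first = mvindex(base, 0)
| eval joined = "," . mvjoin(base, ",") . ","
| eval cleaned = replace(joined, "," . first . "(?=,)", "")
| eval filtered_mv = split(trim(cleaned, ","), ",")
| table base, first, filtered_mv

The lookahead (?=,) deletes each ",A75CD" only when another delimiter follows, so for the sample input filtered_mv comes out as A75AB, A75BA, A75DE, with the survivors' relative order preserved.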
Hi, we are a SaaS customer, but we use private synthetic agents to run synthetic tests within our organization. We updated to PSA 24.10 for Docker last week, and now we see many Chrome containers sitting at 100% CPU utilization for quite some time. Checking "docker container logs" for the Chrome containers, we find this error message:

"Timed out receiving message from renderer: 600.000."
Failed trying to initialize chrome webdriver: Message: timeout: Timed out receiving message from renderer: 600.000 (Session info: chrome-headless-shell=127.0.6533.88)

Any ideas of what might be causing this issue? Thanks, Roberto
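A hedged guess worth testing: "Timed out receiving message from renderer" from headless Chrome in Docker is classically linked to an undersized /dev/shm. If the PSA deployment lets you override container settings, raising shm_size for the Chrome containers is a cheap experiment (the service name below is an assumption; match it to your compose file):

# docker-compose override, a sketch
services:
  chrome:
    shm_size: "2gb"

If the PSA images already launch Chrome with --disable-dev-shm-usage this won't be the cause, and comparing the Chrome versions and flags between 24.10 and your previous PSA release would be the next step.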
Hi, I am joining 2 searches with a left join. It matches some rows and not others, although the column I join on is clearly present in both searches.

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| rename trade_id as NB
| dedup NB
| eval NB = tostring(trim(NB))
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | eval NB = tostring(trim(NB))
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO ]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

The above is my search, and the raw data looks like this:

27/12/2024 17:05:39.000
32265376;DEAD;3887.00000000;XAU;CURR;FXD;FXD;CM TR GLD AUS;X_CMTR XAU SWAP
host = APPSG002SIN0117, source = D:\SkyNet\data\mx_trade_report\MX2_TRADE_STATUS_20241227_200037.csv, sourcetype = mx_to_sky

27/12/2024 18:05:36.651
2024-12-27 18:05:36.651, system="murex", id="645131777", sky_id="645131777", trade_id="32265483", event_id="100023788", mx_status="DEAD", operation="NETTING", action="insertion", tradebooking_sgp="2024/12/26 01:02:01.0000", eventtime_sgp="2024/12/26 01:01:51.7630", sky_to_mq_latency="-9.-237", portfolio_name="I CREDIT INC", portfolio_entity="ANZSEC INC", trade_type="BondTrade"
host = APPSG002SIN0032, source = sky_trade_murex_timestamp, sourcetype = sky_trade_murex_timestamp
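Hedged candidates for the partial matches: join subsearches are silently capped (50,000 result rows and a runtime limit by default, after which remaining rows simply never arrive), dedup NB keeps an arbitrary one of each trade's events, and the two searches may cover different time ranges. A stats-based merge avoids the subsearch caps entirely; a sketch (repeat the remaining rex extractions from the original search where indicated):

(index=sky sourcetype=sky_trade_murex_timestamp) OR (index=sky sourcetype=mx_to_sky)
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "^(?<NB_csv>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
| eval NB = coalesce(NB, NB_csv)
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
... (remaining rex extractions) ...
| stats values(sky_id) as sky_id, values(mx_status) as mx_status, values(TRN_STATUS) as TRN_STATUS, values(NOMINAL) as NOMINAL, values(CURRENCY) as CURRENCY, values(TRN_FMLY) as TRN_FMLY, values(TRN_GRP) as TRN_GRP by NB

Rows that still come out one-sided after this are genuinely unmatched in the data (or outside the time range) rather than casualties of join limits.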
Hello, After deploying the Splunk Universal Forwarder on a Windows machine, I am observing repeated process creation alerts being triggered by my security monitoring solution. These alerts are specifically related to the following Splunk processes:

splunk-winevtlog.exe
splunk-admon.exe
splunk-powershell.exe
splunk-monitornohandle.exe
splunk-regmon.exe
splunk-netmon

These processes are essential for log collection and monitoring, but the constant alerts are causing noise in our monitoring system. I would appreciate any advice or recommended best practices to handle this issue. Specifically:

Are there standard configurations or exclusions that can be applied to suppress alerts for these legitimate processes?
What steps can be taken to ensure the forwarder operates as intended without raising unnecessary security flags?
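The usual practice, hedged, is to tune the monitoring rule rather than the forwarder: allowlist these binaries only when they run from the UF's install path with splunkd.exe as the parent, so a renamed or relocated copy still alerts. A sketch of such a suppression filter in SPL, assuming your security tool's events land in Splunk with CIM-style field names (index and field names are placeholders):

index=<edr_index> parent_process_name=splunkd.exe
    process_path="C:\\Program Files\\SplunkUniversalForwarder\\bin\\*"
    process_name IN ("splunk-winevtlog.exe", "splunk-admon.exe", "splunk-powershell.exe", "splunk-monitornohandle.exe", "splunk-regmon.exe", "splunk-netmon*")

If the tool exposes code-signing data, pinning the exclusion to the Splunk Inc. signature is an even stronger condition than path plus parent process.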