All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is there any way to export logs from Zabbix to Splunk, either via a script or by setting up an HTTP Event Collector (HEC) data input? I'm trying to display all the logs from my Zabbix server in Splunk. Please help, I'm stuck on this.
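For reference, a minimal sketch of what a HEC input could look like in inputs.conf on the Splunk side (the stanza name, token, index, and sourcetype here are placeholders, not values from the question):

[http://zabbix_hec]
disabled = 0
token = <generated-token>
index = zabbix
sourcetype = zabbix:log

With HEC enabled, a script on the Zabbix host can then POST each log line as JSON to https://<splunk-host>:8088/services/collector/event with the header Authorization: Splunk <generated-token>.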
Hello everyone. I encountered an issue, as shown in the image. I can see two machines in the Monitoring Console's forwarder dashboard, but I don't see any machines in my forwarder management. I have added the following configuration to the deployment server, but it still doesn't work after restarting:

[indexAndForward]
index = true
selectiveIndexing = true

The deployment server and UF are both version 9.3. What aspects should I check?
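One thing worth noting: forwarder management only lists clients that phone home via deploymentclient.conf, while the [indexAndForward] stanza controls local indexing and has no effect on client registration. A minimal sketch of what each UF would need (host and port are placeholders):

[target-broker:deploymentServer]
targetUri = <deployment-server-host>:8089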
I want to add an endpoint to the webhook allow list. I checked the documentation for that. However, I cannot find "Webhook allow list" under Settings > Server settings. I'm using a trial instance with version 9.2.2406.109. 
" service error rate 50x 8.976851851851853" field = " service error rate 50x 8.976851851851853" need to extract 8.9 value from above string.
I have created a table with a series of outer joins. I now have a column 'Category' and another six, call them A to F. In Category I have values N1 and N2, and I would like to create a new row with Category=N3 and values for A to F equal to the sum of those for N1 and N2. I've tried every possible thing I could find but couldn't get it to work; any help is appreciated, thanks! Current code looks something like this: ... queries & joins... | table "Category" "A" ... "F"
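One way to do this in SPL is appendpipe, which appends a summary row computed from the existing results; a sketch, assuming A to F are the literal column names:

| appendpipe
    [ where Category="N1" OR Category="N2"
    | stats sum(A) as A, sum(B) as B, sum(C) as C, sum(D) as D, sum(E) as E, sum(F) as F
    | eval Category="N3" ]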
Hello all, I have set up a syslog server to collect all the network device logs. From the syslog server, I am forwarding these logs to the Splunk platform via a UF. The network component logs arriving in Splunk from the syslog server are delayed 14+ hours behind the actual log time; however, on the same host, system audit logs arrive in near real time. I have 50+ network components to collect syslog from for security monitoring. My current architecture: all network syslog --> syslog server (UF installed) --> UF forwards logs to Splunk Cloud. Kindly suggest an alternative approach to get the network logs in near real time.
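A roughly constant 14-hour offset often points to a timestamp or timezone misparse rather than a transport delay, so it may be worth checking the sourcetype's time settings where parsing happens (the indexing tier, for Splunk Cloud). A sketch of a props.conf override, with a placeholder sourcetype name:

[network:syslog]
TZ = UTC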
Hello Splunk SOAR family, hope each of you is doing well. Does anyone have tips for installing and configuring the new version of Splunk SOAR?
I've been attempting to see if it's possible to search for a term while ignoring all minor breakers that may or may not be in it. For example, in my case, I'm trying to search for the MAC address 12:EA:5F:72:11:AB, but I'd also like to find all instances of 12-EA-5F-72-11-AB, 12EA.5F72.11AB, or even just 12EA5F7211AB, without needing to deliberately specify each of these variations. I thought I could do it using TERM(), but so far I haven't had any luck, and after reading the docs, I can see I may have misunderstood that command. Is there any way to do this simply?
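One fallback that avoids enumerating the variants is to filter with a regex that makes each separator optional, at the cost of losing index-time term filtering (so keep the base search tightly scoped); a sketch with a placeholder index:

index=my_index
| regex _raw="(?i)12[:.\-]?EA[:.\-]?5F[:.\-]?72[:.\-]?11[:.\-]?AB"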
Hello, I have installed Splunk on AlmaLinux following a course and am facing this error (see image). Thanks.
Hello, I just installed the Tenable WAS Add-On for Splunk in my test instance. When combing through the data, I noticed we were not able to see the State and VPR fields. Both of these fields are needed to stay consistent with our current vulnerability management program. The State field is an absolute must to report on active and fixed vulnerabilities. Thanks in advance for any assistance provided.
I'm a bit stumped on this problem. Before I jump into the issue, there are a couple of restrictions: (1) I'm working in an environment running an old version of Splunk which does not have access to the mvmap() function. I'm working on getting that updated, but until then I still need a solution to this problem. (2) This operation is not the only piece of logic I'm trying to accomplish here; assume there are other unnamed fields which are already sorted in a specific way that we do not want to disturb.

I'm attempting to filter out all elements of a list which match the first element, leaving only the elements which are not a match. Here is an example which does work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, "A75CD"))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

However, when I attempt to switch it out for something like the following, it does not work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, mvindex(base,0)))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I have even attempted to solve it using a foreach command, but was also unsuccessful:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| foreach mode=multivalue base
    [ eval filtered_mv = if('<<ITEM>>'!=mvindex(base,0), mvappend(filtered_mv,'<<ITEM>>'), filtered_mv) ]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I'm open to any other ideas which might accomplish this better or more efficiently. I'm not sure where I'm going wrong with this one, or whether this idea is even possible.
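One workaround that avoids both mvmap and mvfilter's single-field restriction is to join the multivalue field into a delimited string, strip the first value with replace(), and split it back; a sketch, assuming the values contain no commas or regex metacharacters:

| makeresults
| eval base=split("A75CD,A75AB,A75CD,A75BA,A75DE", ",")
| eval search_value=mvindex(base, 0)
| eval filtered_mv=split(ltrim(replace(mvjoin(base, ","), "(?:^|,)" . search_value . "(?=,|$)", ""), ","), ",")

The lookahead (?=,|$) removes each matching value together with the comma before it, so the remaining elements and their order are untouched.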
Hi, we are a SaaS customer but use private synthetic agents to run synthetic tests within our organization. We updated to PSA 24.10 for Docker last week, and now we see many Chrome containers sitting at 100% CPU utilization for quite some time. Checking the docker container logs for the Chrome containers, we find this error message:

Failed trying to initialize chrome webdriver: Message: timeout: Timed out receiving message from renderer: 600.000
(Session info: chrome-headless-shell=127.0.6533.88)

Any ideas what might be causing this issue? Thanks, Roberto
Hi, my query joins two searches with a left join. It matches some rows but not others, although the column I join on is clearly present in both searches.

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| rename trade_id as NB
| dedup NB
| eval NB = tostring(trim(NB))
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | eval NB = tostring(trim(NB))
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO ]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

The above is my search. Here is sample raw data from each sourcetype:

27/12/2024 17:05:39.000
32265376;DEAD;3887.00000000;XAU;CURR;FXD;FXD;CM TR GLD AUS;X_CMTR XAU SWAP
host = APPSG002SIN0117, source = D:\SkyNet\data\mx_trade_report\MX2_TRADE_STATUS_20241227_200037.csv, sourcetype = mx_to_sky

27/12/2024 18:05:36.651
2024-12-27 18:05:36.651, system="murex", id="645131777", sky_id="645131777", trade_id="32265483", event_id="100023788", mx_status="DEAD", operation="NETTING", action="insertion", tradebooking_sgp="2024/12/26 01:02:01.0000", eventtime_sgp="2024/12/26 01:01:51.7630", sky_to_mq_latency="-9.-237", portfolio_name="I CREDIT INC", portfolio_entity="ANZSEC INC", trade_type="BondTrade"
host = APPSG002SIN0032, source = sky_trade_murex_timestamp, sourcetype = sky_trade_murex_timestamp
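A side note on the technique: join runs the second search as a subsearch, which is silently truncated when it exceeds its row or runtime limits, and that commonly explains rows that fail to match even though the key is present. A sketch of a stats-based merge that avoids those limits (the rex patterns here are abbreviated to the join key and one field each):

(index=sky sourcetype=sky_trade_murex_timestamp) OR (index=sky sourcetype=mx_to_sky)
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "^(?<csv_id>\d+);(?<TRN_STATUS>[^;]+);"
| eval NB=tostring(trim(coalesce(trade_id, csv_id)))
| stats values(*) as * by NB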
What happens if a Splunk SOAR license expires? I cannot find a document that explains it.
Being completely new to this: our SMTP servers gathered data completely before using the SMTP add-on. My domain admin now wants me to start ingesting D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\. So I have deployed TA-Exchange-Mailbox from the TA-Exchange app downloaded from Splunkbase. I also deployed TA-Exchange-SMTP.

The TA-Exchange-SMTP local/inputs.conf file looks like this (I only made a couple of changes to the path):

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\Edge\ProtocolLog\...\*]
index = smtp
sourcetype = exchange:smtp

I added this one after install:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*]
index = smtp
sourcetype = MSExch2019:Tracking

So I am not 100% sure this is correct. For TA-Exchange-Mailbox, I have three stanzas based on previous messages in this forum:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:MessageTracking
queue = parsingQueue
index = smtp
disabled = 0

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpReceive]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:SmtpReceive
queue = parsingQueue
index = smtp
disabled = false

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpSend]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:SmtpSend
queue = parsingQueue
index = smtp
disabled = false

Again, I know nothing about this level of data gathering, so I'm hoping one of you who does will be able to guide me in the right direction so that I can begin ingesting.
Hello teachers, I have an SPL statement that runs into the limitations of the map command; currently the statistical results are dropping data and coming out inaccurate. Could you please advise on anything in SPL that can replace map to achieve this? The SPL is as follows:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval earliest_time=strftime(earliest_time,"%F 00:00:00")
| eval earliest_time=strptime(earliest_time,"%F %T")
| eval earliest_time=round(earliest_time)
| eval searchEarliestTime2=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime2=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime2, searchLatestTime2, "1d")
| mvexpand start
| eval end=relative_time(start,"+7d")
| where end <= searchLatestTime2
| eval end=round(end)
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| fields start a end b
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edwapp sourcetype=ygttest is_cont_sens_acct=\"是\"
    | dedup day oprt_user_name blng_dept_name oprt_user_acct
    | stats count as \"fwcishu\" by day oprt_user_name blng_dept_name oprt_user_acct
    | eval a=$a$
    | eval b=$b$
    | stats count as \"day_count\", values(day) as \"qdate\", max(day) as \"alert_date\" by a b oprt_user_name, oprt_user_acct" maxsearches=500000
| where day_count > 2
| eval alert_date=strptime(alert_date,"%F")
| eval alert_date=relative_time(alert_date,"+1d")
| eval alert_date=strftime(alert_date, "%F")
| table a b oprt_user_name oprt_user_acct day_count qdate alert_date

I want to analyze data from 2019 to the present, where multiple visits by a user in one day count as a single visit, to calculate each user's number of continuous visit days within every rolling 7-day interval since 2019.
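A map-free sketch of the same rolling 7-day logic using bin plus streamstats with a time window; the field names are taken from the original search, and the where threshold mirrors the original:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| bin _time span=1d
| stats count by _time, oprt_user_name, blng_dept_name, oprt_user_acct
| sort 0 -_time
| streamstats time_window=7d count as day_count by oprt_user_name, oprt_user_acct
| where day_count > 2

Here stats by the day-binned _time collapses multiple visits in one day to a single row, and streamstats time_window=7d counts distinct visit days per user in each trailing 7-day window, so no per-window subsearches are needed.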
I'm trying to understand the differences between event indexes and metric indexes in terms of how they handle storage and indexing. I have a general understanding of how event indexes work based on this document, but the documentation on metrics seems limited. Specifically, I'm curious about:

How storage and indexing differ for event indexes vs. metric indexes under the hood.

Why high cardinality is a bigger concern for metric indexes compared to event indexes.

I understand from this glossary entry that metric time series (MTS) are central to how metrics work, but I'd appreciate a more in-depth explanation of the inner workings and trade-offs involved. Additionally, if I have a dimension with a unique ID, would it be better to use an event index instead of a metric index? If anyone could shed light on this or point me toward relevant resources, that would be great!
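On the MTS point: each unique combination of metric name and dimension values defines one metric time series, so a dimension carrying a unique ID creates a new MTS per value, which is exactly the high-cardinality concern. A sketch of the query side, with placeholder index and metric names:

| mstats avg(cpu.usage) WHERE index=my_metrics span=5m BY host

Every distinct host value here is a separate MTS; if host were replaced by a unique request ID, the series count would grow without bound, which is why an event index is usually the better fit for that kind of field.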