All Posts


Is there any way to export logs from Zabbix to Splunk, either via a script or by setting up an HTTP Event Collector (HEC) data input? I'm trying to display all the logs from my Zabbix server in Splunk. Please help, I'm stuck on this.
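One common approach is a small script that reads Zabbix log lines and POSTs them to a Splunk HEC input. The sketch below is not a ready-made integration, just a minimal illustration using only the Python standard library; the URL, token, and sourcetype are placeholder assumptions you would replace with values from your own Splunk instance (the token comes from Settings > Data inputs > HTTP Event Collector).

```python
import json
import urllib.request

# Placeholder endpoint and token -- substitute your own Splunk host and
# a real HEC token. Both values below are assumptions for illustration.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_event(line: str, sourcetype: str = "zabbix:log",
                    host: str = "zabbix-server") -> bytes:
    """Wrap one raw log line in the JSON envelope that HEC expects."""
    payload = {"event": line, "sourcetype": sourcetype, "host": host}
    return json.dumps(payload).encode("utf-8")

def send_to_hec(line: str) -> None:
    """POST one event to HEC (requires a reachable Splunk instance)."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_event(line),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

# Example envelope for one Zabbix log line (no network call here):
print(build_hec_event("Zabbix agent item became not supported").decode())
```

In practice you would tail the Zabbix server log (or query the Zabbix API) and call `send_to_hec` per line or per batch; batching multiple events into one POST is kinder to the collector.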
Hello everyone, I encountered an issue, as shown in the image. I can see two machines in the MC's forwarder dashboard, but I don't see any machines in my forwarder management. I have added the following configuration to the DS, but it still doesn't work after restarting:

[indexAndForward]
index = true
selectiveIndexing = true

The deployment server and UF are both version 9.3. What aspects should I check?
I want to add an endpoint to the webhook allow list. I checked the documentation for that. However, I cannot find "Webhook allow list" under Settings > Server settings. I'm using a trial instance with version 9.2.2406.109. 
Use the rex command, for example:

| makeresults
| eval field = " service error rate 50x 8.976851851851853"
| rex field=field "service error rate\s+\w+\s+(?<value>\d+\.\d)"

(This is an example you can run in a search window.) Adjust the regex to match what you expect in the data.
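For anyone wanting to verify the pattern outside Splunk, the same extraction can be sketched with Python's `re` module; the pattern is a direct transliteration of the SPL regex above:

```python
import re

# Mirror of the rex pattern: literal prefix, whitespace, one word
# ("50x"), whitespace, then digits, a dot, and one more digit.
field = " service error rate 50x 8.976851851851853"
match = re.search(r"service error rate\s+\w+\s+(?P<value>\d+\.\d)", field)
print(match.group("value"))  # -> 8.9
```

Note that `\d+\.\d` deliberately captures only one digit after the decimal point, which is why the result is 8.9 rather than the full value; widen it to `\d+\.\d+` if you want the whole number.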
Without knowing your data, I would suggest you start with:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| bin _time span=7d
| stats dc(_time) as days_present earliest(_time) as earliest_time latest(_time) as latest_time count as "fwcishu" by _time day oprt_user_name blng_dept_name oprt_user_acct
| eventstats min(*_time) as all_*_time

This will give you a breakdown of all the data you need, with the earliest and latest for each grouping. It counts the number of days per 7-day period (days_present) and groups by week and all your other grouping parameters. You can calculate the overall earliest/latest date with eventstats. I would then expect you can manipulate that result set to get what you want. Using map is not the right solution. If you share some of the data and mock up an example of what you're trying to end up with, it would help.
field = " service error rate 50x 8.976851851851853"

I need to extract the value 8.9 from the above string.
Thank you for your reply, but the statement you provided does not loop over each start time and end time.
Thank you for your response. To achieve this, we will run the query in a loop every 7 days, from the end time back to the earliest start time of the data, and write the results to an intermediate index.
Thanks guys!

index=sky sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky
``` Parse sky_trade_murex_timestamp events (note that trade_id is put directly into the NB field) ```
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
``` Parse mx_to_sky events ```
| rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
``` Reduce to just the fields of interest ```
| fields sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
``` "Join" events by NB using stats ```
| head 1000
| stats values(*) as * by NB
| fillnull
| head 1
| transpose 0

With this I get about 1000 events over an estimated one-hour range, and results as shown:

column           | row 1
NB               | 0
action           | 0
event_id         | 0
mx_status        | live
operation        | sydeod
portfolio_entity | SG KOREA USA ...
portfolio_name   | AUD APT ...
sky_id           | 673821 ...
trade_type       | VanillaSwap
tradebooking_sgp | 2024/12/26 00:06:34.3572 ...
Just ask the "Men with the Black Fez"     
I worked it out just now too with appendpipe. Thanks a lot for your detailed response; I should've marked your response as the solution. Thanks again!
Fixed it myself with appendpipe
If your base search includes N1, N2, N3, ... Nn, you can add additional logic to generate the N-value dynamically:

| appendpipe
    [| search Category=N*
     | stats count sum(*) as *
     | eval Category="N".tostring(count+1)
     | fields - count]

Or if you want to subtotal all Category values:

| appendpipe
    [| rex field=Category "(?<Category>[^\d]+)"
     | stats count sum(*) as * by Category
     | eval Category=Category.tostring(count+1)
     | fields - count]

=>

Category A B C D E F
N1       1 2 4 2 4 1
N2       0 5 4 3 5 7
M1       1 0 1 0 4 3
M2       1 1 3 5 0 1
U1       0 4 6 5 4 3
M3       2 1 4 5 4 4
N3       1 7 8 5 9 8
U2       0 4 6 5 4 3

But note that you'll need to add custom sorting logic if you want something other than the default sort order.
Ah, that detail wasn't clear from the original message. You can use the appendpipe command to stream the base search results through a subsearch and then append the subsearch results to the base search results:

| makeresults format=csv data="Category,A,B,C,D,E,F
N1,1,2,4,2,4,1
N2,0,5,4,3,5,7
M1,1,0,1,0,4,3
M2,1,1,3,5,0,1
U1,0,4,6,5,4,3"
| table Category *
| appendpipe
    [| search Category=N*
     | stats sum(*) as *
     | eval Category="N3"]

=>

Category A B C D E F
N1       1 2 4 2 4 1
N2       0 5 4 3 5 7
M1       1 0 1 0 4 3
M2       1 1 3 5 0 1
U1       0 4 6 5 4 3
N3       1 7 8 5 9 8
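For readers less familiar with SPL, the appendpipe + stats trick above is just "sum the matching rows and append the result as a new row". A plain-Python sketch of the same logic, using the sample table from the answer:

```python
# Sum the N* rows of the sample table and append the total as a new
# row, mirroring what appendpipe [| search Category=N* | stats sum(*)]
# does in SPL.
rows = [
    {"Category": "N1", "A": 1, "B": 2, "C": 4, "D": 2, "E": 4, "F": 1},
    {"Category": "N2", "A": 0, "B": 5, "C": 4, "D": 3, "E": 5, "F": 7},
    {"Category": "M1", "A": 1, "B": 0, "C": 1, "D": 0, "E": 4, "F": 3},
    {"Category": "M2", "A": 1, "B": 1, "C": 3, "D": 5, "E": 0, "F": 1},
    {"Category": "U1", "A": 0, "B": 4, "C": 6, "D": 5, "E": 4, "F": 3},
]

def append_total(rows, prefix, label):
    """Append one row summing every row whose Category starts with prefix."""
    matched = [r for r in rows if r["Category"].startswith(prefix)]
    total = {"Category": label}
    for col in "ABCDEF":
        total[col] = sum(r[col] for r in matched)
    return rows + [total]

result = append_total(rows, "N", "N3")
print(result[-1])
# -> {'Category': 'N3', 'A': 1, 'B': 7, 'C': 8, 'D': 5, 'E': 9, 'F': 8}
```

The key point, as in the SPL version, is that the original rows pass through untouched and only one aggregate row is appended.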
Sorry, I'm still not quite getting it. This is what I would like to achieve:

Category A B C D E F
N1       1 2 4 2 4 1
N2       0 5 4 3 5 7
M1       1 0 1 0 4 3
M2       1 1 3 5 0 1
U1       0 4 6 5 4 3

I would like to create an additional row that is the sum of N1 and N2, and append it to the table above:

N3       1 7 8 5 9 8
You can put whatever you want in the label argument. It sets the value of the field specified by labelfield in the totals row:

| addcoltotals labelfield=Category label="Total"
| addcoltotals labelfield=Category label="SUM[n1..nN]"
| addcoltotals labelfield=Category label="Life, the Universe and Everything"
Hi, thanks for the reply. I have more rows than just N1 and N2.
Hi @MachaMilkshake,

The addcoltotals command should do exactly what you need:

| addcoltotals labelfield=Category label=N3
Hi @ranjith4,

What is the aggregate throughput of all sources? If you're unsure, what is the peak daily ingest of all sources? Splunk Universal Forwarder uses very conservative default queue sizes and a throughput limit of 256 KBps. As a starting point, you can disable the throughput limit in $SPLUNK_HOME/etc/system/local/limits.conf:

[thruput]
maxKBps = 0

If the forwarder is still not delivering data as quickly as it arrives, we can adjust output queue sizes based on your throughput (see Little's Law). As @PickleRick noted, the forwarder may switch to an effectively single-threaded batch mode when reading files larger than 20 MB. Increase the min_batch_size_bytes setting in limits.conf to a value larger than your largest daily file, or some other arbitrarily large value:

[default]
# 1 GB
min_batch_size_bytes = 1073741824

If throughput is still an issue, you can enable additional parallel processing with the server.conf parallelIngestionPipelines setting, but I wouldn't do that until after tuning the other settings.
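As a back-of-the-envelope for the queue sizing mentioned above: Little's Law says L = λ × W, i.e. average queue occupancy equals arrival rate times average time an item waits. A tiny sketch with illustrative numbers (not measured from any real forwarder):

```python
# Little's Law: items in the system (L) = arrival rate (lambda) * wait (W).
# Applied to a forwarder queue: if data arrives at 2 MBps and we can
# tolerate it sitting in the queue for 0.5 s, the queue must hold at
# least that product. Both figures below are made-up example values.
arrival_rate_kbps = 2048   # 2 MBps aggregate ingest
avg_queue_wait_s = 0.5     # tolerated time in queue

queue_size_kb = arrival_rate_kbps * avg_queue_wait_s
print(queue_size_kb)  # -> 1024.0  (i.e. a ~1 MB queue at minimum)
```

The same arithmetic, run with your measured peak throughput, gives a defensible starting point for the output queue size rather than guessing.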
I have created a table with a series of outer joins. I now have a column 'Category' and another six, call them A to F. In Category, I have values N1 and N2, and I would like to create a new row with Category=N3 and values for A to F equal to the sum of those for N1 and N2. I've tried everything I could find but couldn't get it to work; any help is appreciated, thanks!

Current code looks something like this:

... queries & joins ...
| table "Category" "A" ... "F"