All Topics


Hi All, I have a few concerns regarding bucket-rolling criteria; my question is focused on hot buckets. We have two types of index:

1. Default
2. Local (customized) index

When I check the log retention of the default index, the hot retention shows 90 days, with Maxbucketcreate=auto and Maxdbsize=auto, and we don't define anything for the local indexes. While checking, we found that for one particular index we can only keep 55 days of logs in our hot buckets, and the log consumption for this index is roughly 12-14 GB per day. For another local index we can see more than 104 days of logs.

My concern is which retention policy Splunk follows to roll the buckets for a local index:

1. The 90-day period (which is not happening here), or
2. Rolling when the hot bucket is full, on a per-day basis (if Splunk follows this, then how much data can an index store per day, how many hot buckets do we have for a local index, and how much data can each bucket contain)?

Hope I'm not confusing. Thanks
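For reference, a minimal indexes.conf sketch (the index name and values are illustrative, not the poster's actual configuration) showing the settings that typically govern hot-bucket rolling versus age-based retention:

[my_local_index]
homePath   = $SPLUNK_DB/my_local_index/db
coldPath   = $SPLUNK_DB/my_local_index/colddb
thawedPath = $SPLUNK_DB/my_local_index/thaweddb
# maxDataSize controls when a hot bucket rolls to warm by size (auto = 750 MB)
maxDataSize = auto
# maxHotBuckets caps how many hot buckets the index keeps open at once
maxHotBuckets = 3
# frozenTimePeriodInSecs controls when buckets are frozen (deleted/archived);
# 7776000 seconds = 90 days. It does not control hot-to-warm rolls.
frozenTimePeriodInSecs = 7776000

Note the separation: hot buckets roll to warm based on size, bucket count, or time span, while the age-based retention period only removes a bucket once its newest event is older than frozenTimePeriodInSecs.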
Hi, I have a search like the one below, where the logs come from the fig1, fig4, fig5, and fig6 indexes from either of two hosts, say host1 and host2. The two hosts never send logs at the same time; only one of them actively sends logs to the fig1 index with sourcetype abc.

| tstats latest(_time) as latest_time WHERE (index=fig*) NOT index IN (fig2, fig3) sourcetype="abc" by host index sourcetype
| eval silent_in_hours=round((now() - latest_time)/3600,2)
| where silent_in_hours>20
| eval latest_time=strftime(latest_time, "%m/%d/%Y %H:%M:%S")

I want to build logic so that if either host1 or host2 is sending the logs, the query above gives no output (it should not display the silent host, because we are getting the logs from the other host). Thanks in advance
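One possible approach (a sketch, not tested against the poster's data): attach the smallest silence per index/sourcetype across hosts with eventstats, and only emit rows when every host is silent. Note that tstats only returns hosts that produced at least one event in the search window, so a host that never reported at all would need a separate check.

| tstats latest(_time) as latest_time WHERE index=fig* NOT index IN (fig2, fig3) sourcetype="abc" by host index sourcetype
| eval silent_in_hours=round((now() - latest_time)/3600, 2)
| eventstats min(silent_in_hours) as min_silent_in_hours by index sourcetype
| where min_silent_in_hours > 20
| eval latest_time=strftime(latest_time, "%m/%d/%Y %H:%M:%S")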
Hello, I use the search below in order to timechart events on the field "BPE - Evolution du ratio de perte de paquets". It works fine, but is there a way to do the same thing more simply, please?

`index` sourcetype="netproc_tcp" ezc="BPE"
| fields netproc_tcp_retrans_bytes site
| bin _time span=30m
| stats sum(netproc_tcp_retrans_bytes) as "PaquetsPerdusBPE" by _time site
| search site="$site$"
| append
    [| search `index` sourcetype="netproc_tcp" ezc="BPE"
    | fields netproc_tcp_total_bytes site
    | bin _time span=30m
    | stats sum(netproc_tcp_total_bytes) as "PaquetsGlobauxBPE" by _time site ]
| search site="$site$"
| stats last("PaquetsPerdusBPE") as "BPE - Paquets perdus (bytes)", last("PaquetsGlobauxBPE") as "BPE - Nombre total de paquets (bytes)" by _time site
| eval "BPE - Evolution du ratio de perte de paquets" = ('BPE - Paquets perdus (bytes)' / 'BPE - Nombre total de paquets (bytes)') * 100
| fields - "BPE - Paquets VMware perdus (bytes)" "BPE - Nombre total de paquets (bytes)" site
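Since both sums come from the same base search, the append (which runs the search twice) can likely be dropped in favor of a single stats that computes both aggregates. A sketch, assuming the intermediate field names perdus and total are acceptable:

`index` sourcetype="netproc_tcp" ezc="BPE" site="$site$"
| bin _time span=30m
| stats sum(netproc_tcp_retrans_bytes) as perdus, sum(netproc_tcp_total_bytes) as total by _time site
| eval "BPE - Evolution du ratio de perte de paquets" = (perdus / total) * 100
| fields _time site "BPE - Evolution du ratio de perte de paquets"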
Hello community, I’m trying to figure out how to perform a search which considers events on different days. The idea is to search for events by IP address, and what I’d like to achieve is to check whether the same IP (the same type of event) is observed in more than one specified timeframe (day/week/month). I started out with the following:

<base-search> earliest="-7d@d" latest="@d"
| stats count by ip date

and thought I could compare whether an IP address occurs on more than one date. Though I suppose I’d have to loop through all the results for each IP, and I could not get the SPL to work at all. Instead I figured that I could use something like:

| bin span=1d _time
| stats count as c_ip by _time

I figured I could compare the contents of the bins somehow, though the bins are still just by "date". I figured I’d be able to combine this with something like eval to get the IP addresses which have events on more than one date in the range, preferably with the number of events per date/bin and a total. This may also need some fillnull or similar. For example:

IP      2022-06-29  2022-07-01  2022-07-02  2022-07-12  Sum
<ip1>   6           5           8           2           21
<ip2>   -           5           -           4           9

Though I am not having any success. I hope I managed to articulate my idea here. If so, is what I’m aiming for possible? Any suggestions/feedback are greatly appreciated; close enough would be a lot better than nothing.

Best regards // G
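A sketch of one way to produce that matrix (untested; it assumes the base search yields an ip field): count distinct days per IP with eventstats, keep only IPs seen on more than one day, then pivot with chart and add a row total.

<base-search> earliest="-7d@d" latest="@d"
| eval day=strftime(_time, "%Y-%m-%d")
| eventstats dc(day) as days_seen by ip
| where days_seen > 1
| chart count over ip by day
| addtotals fieldname=Sum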
Hello, I have some field values which I am unable to replace with the 'replace' command in the CSV file. I have power states of servers, which are Powered On and Powered Off, and there are some fields which have both powered-on and powered-off status, like:

server name  PoweredOn
server name  PoweredOff
server name  poweredOn poweredOff
server name  poweredOn poweredOff suspended
server name  poweredOff PoweredOn poweredOff

I was able to change the field value "poweredOn poweredOff suspended" with

| replace "*poweredOff poweredOn suspended*" with "*Suspended*"

but when I change the command to

| replace "*poweredOn poweredOff*" with "*PoweredOn*"

it doesn't take effect. Can anyone tell me how to replace these?
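An alternative sketch that avoids replace's wildcard matching altogether, using eval with case() and match() (the field name power_state is an assumption; case() returns the first matching branch, so the "suspended" test must come first):

| eval power_state=case(
    match(power_state, "(?i)suspended"),  "Suspended",
    match(power_state, "(?i)poweredOn"),  "PoweredOn",
    match(power_state, "(?i)poweredOff"), "PoweredOff",
    true(), power_state)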
How do I sort the data based on the last word after the hyphen?

data_file_hyper_v_server
data_file_linux_server
data_file_vmware_instance
data_file_win_server

Expected output:

data_file_hyper_v_server
data_file_linux_server
data_file_win_server
data_file_vmware_instance
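A sketch (the field name file_name is an assumption, and the delimiter is taken to be an underscore, as in the sample values): extract the last segment with rex, then sort on it. Sorting the suffix in descending order puts the "server" entries before "instance", matching the expected output; the leading 0 removes sort's default 10,000-row cap.

| rex field=file_name "_(?<type_suffix>[^_]+)$"
| sort 0 -type_suffix file_name
| fields - type_suffix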
Hi everyone, I want to create an hourly alert that logs multiple servers' CPU usage, queue length, memory usage, and disk space used. I have managed to create the following query, which lists out my requirements nicely in the following image. (Note that in the image, the log is set to output every 5 minutes only, hence the null values in the image below.)

index=* host=abc_server tag=performance (cpu_load_percent=* OR wait_threads_count=* OR mem_free_percent=* OR storage_free_percent=*)
| eval cpu_load = 100 - PercentIdleTime
| eval mem_used_percent = 100 - mem_free_percent
| eval storage_used_percent = 100 - storage_free_percent
| timechart eval(round(avg(cpu_load),2)) as "CPU Usage (%)", eval(round(avg(wait_threads_count), 2)) as "Queue Length", eval(round(avg(mem_used_percent), 2)) as "Memory Used (%)", eval(round(avg(storage_used_percent), 2)) as "Disk Space Used (%)"

For the next step, however, I am unable to insert the host's name as another column. Is there a way I can insert a new column for host name in a timechart, as shown below?

Host name    _time                CPU Usage  Queue Length  Memory Usage  Disk Space Usage
abc_server   2022-07-21 10:00:00  1.00       0.00          37.30         9.12
efg_server   2022-07-21 10:00:00  0.33       0.00          26.50         8.00
your_server  2022-07-21 10:00:00  9.21       0.00          10.30         5.00
abc_server   2022-07-21 10:01:00  1.32       0.00          37.30         9.12
efg_server   2022-07-21 10:01:00  0.89       0.00          26.50         8.00
your_server  2022-07-21 10:01:00  8.90       0.00          10.30         5.00

Thanks in advance.
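timechart only splits by a single field besides _time, so a common workaround (a sketch, untested; host=* widens the original host=abc_server filter, and the 1m span mirrors the sample table) is to replace timechart with bin plus stats ... by _time host:

index=* host=* tag=performance (cpu_load_percent=* OR wait_threads_count=* OR mem_free_percent=* OR storage_free_percent=*)
| eval cpu_load = 100 - PercentIdleTime
| eval mem_used_percent = 100 - mem_free_percent
| eval storage_used_percent = 100 - storage_free_percent
| bin _time span=1m
| stats avg(cpu_load) as cpu, avg(wait_threads_count) as queue, avg(mem_used_percent) as mem, avg(storage_used_percent) as disk by _time host
| foreach cpu queue mem disk
    [ eval <<FIELD>> = round('<<FIELD>>', 2) ]
| rename host as "Host name", cpu as "CPU Usage", queue as "Queue Length", mem as "Memory Usage", disk as "Disk Space Usage"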
How can I collect data from NetApp into Splunk? Can someone suggest an approach?
Hi all, I found that searches in my unix index return events only up to the past two months for a significant number of sourcetypes (bash_history, audit, secure, sudo logs). Shouldn't the events be retained according to the retention period set using 'frozenTimePeriodInSecs'? We set the period to 365 days.

Regards, Zijian
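One thing worth checking (an assumption, not a diagnosis): retention is bounded by both frozenTimePeriodInSecs and the index's size cap (maxTotalDataSizeMB), whichever is reached first, so a busy index can freeze buckets well before the age limit. A dbinspect sketch to see what is actually on disk for the index:

| dbinspect index=unix
| stats min(startEpoch) as oldest_event, sum(sizeOnDiskMB) as total_mb by state
| eval oldest_event=strftime(oldest_event, "%Y-%m-%d")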
Hey All, I have this search, and I want two results on my visualization: I want to see both "Method" and "User". What is missing here?

index=XXX sourcetype="XXX:XXX:message" data.logName="projects/*/logs/cloudaudit.googleapis.com%2Factivity" data.resource.labels.project_id IN (*) AND
    ( data.resource.type IN(*) (data.protoPayload.methodName IN ("*update*","*patch*","*insert*") AND data.protoPayload.authorizationInfo{}.permission IN ("*update*","*insert*"))
    OR (data.resource.type IN(*) (data.protoPayload.methodName IN ("*create*", "*insert*") AND data.protoPayload.authorizationInfo{}.permission="*create*"))
    OR (data.resource.labels.project_id IN (*) AND data.resource.type IN(*) data.protoPayload.methodName IN (*delete*)))
| eval name1='data.protoPayload.authorizationInfo{}.resourceAttributes.name'
| eval name2='data.protoPayload.authorizationInfo{}.resource'
| eval Name=if(name1="-", name2, name1)
| search Name!="-"
| rename data.protoPayload.methodName as Method, data.resource.type as "Resource Type", data.protoPayload.authorizationInfo{}.permission as Permission, data.timestamp as Time, data.protoPayload.authenticationInfo.principalEmail as User, data.protoPayload.requestMetadata.callerIp as "Caller IP"
| timechart count by Method
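timechart accepts only one split-by field, which is probably why User disappears. One sketch is to combine the two fields into a single split value before timecharting (the separator and limit are arbitrary choices):

... | eval MethodUser=Method." | ".User
| timechart count by MethodUser limit=20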
Hello community, I am trying to combine two different things and cannot figure out how. I am looking at a certain action and counting how many times it is observed per IP address and day. Then I’m plotting per IP by day to try to find recurring events based on IP address. I got this far:

<base-search> earliest="-7d@d" latest="@d"
| chart count over ip by date_wday

which does exactly what is intended, though the details are lost as there are a lot of single events per IP address. So, I’d like to filter out any IP address with only one event during the period (here, one week). This works fine for filtering:

| stats count as sum by ip
| search sum > 1

But then I lose the details needed for the chart part. I figured maybe I could use eval to filter out based on total count, but could not put together anything which worked. Even when I tried to combine stats and eval I either failed or ended up with something which could not be presented graphically. Any suggestions are more than appreciated.

Best regards // G
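A sketch that keeps the per-day detail while dropping single-event IPs: eventstats attaches the total per IP to every event, so the filter can run before the chart without collapsing the rows.

<base-search> earliest="-7d@d" latest="@d"
| eventstats count as total_per_ip by ip
| where total_per_ip > 1
| chart count over ip by date_wday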
Hi All, I am trying to create an efficient way to pull out certain Windows events for my report, but I am not sure it will return the results I want; it truncates some of the results. I might be doing something wrong. Please see the code that I am currently running and suggest an improvement. Thank you all!

index=mbda_windows_server sourcetype=XmlWinEventLog EventCode IN (4718, 4728, 4729, 4730, 4732, 4733, 4756, 4757, 4762, 4796, 5136)
| dedup src_user, MemberSid, Group_Domain, Group_Name, host, _time
| convert timeformat="%d/%m/%Y %H:%M" ctime(_time)
| rename src_user AS Login, MemberSid AS Account, Group_Domain AS Domain, Group_Name AS Group, host AS Host, _time AS Min_NormDateMin, name AS EventName
| table Login, Account, Domain, Group, Host, Min_NormDateMin, EventCode, EventName
| sort EventCode
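Two notes on the sketch above. First, the original filter "EventCode=4718 OR 4728 OR ..." only compares EventCode against the first value; the bare numbers match anywhere in the raw event, so the IN form shown is what was presumably intended. Second, if the truncation shows up as a hard cap on the row count, one likely culprit (an assumption, not confirmed from the post) is that sort keeps only 10,000 results by default; a leading count of 0 removes that limit:

| sort 0 EventCode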
How many values are allowed in an IN clause which is part of a where clause? I want to read 277 values, to be precise.

index=abc sourcetype="ccinfrap.dc_pqr_jws:app" "[SubscriptionService] Consumer sent message" "Not Predicted to Finish"
| rex mode=sed "s/^.*message {/{/"
| rex mode=sed "s/\n}.*/\n}/"
| spath
| fillnull jobStreamName value="BLANK"
| where jobStreamName IN(
    "stream1"
   ,"stream2"
   ,"stream3"
   .
   .
   ,"stream277"
)
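Rather than hard-coding 277 values inline, one common pattern (a sketch; the lookup file job_streams.csv and its jobStreamName column are assumptions) is to keep the list in a lookup and let a subsearch expand it into the filter:

index=abc sourcetype="ccinfrap.dc_pqr_jws:app" "[SubscriptionService] Consumer sent message" "Not Predicted to Finish"
| rex mode=sed "s/^.*message {/{/"
| rex mode=sed "s/\n}.*/\n}/"
| spath
| fillnull jobStreamName value="BLANK"
| search [ | inputlookup job_streams.csv | fields jobStreamName ]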
Please help me understand the logic of the query below:

eval count=if(isnull(count), -1, count)
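The expression reads: if the field count is null (missing), set it to -1; otherwise leave it unchanged. A minimal self-contained example to see it in action:

| makeresults count=2
| streamstats count as row
| eval count=if(row=1, 5, null())
| eval count=if(isnull(count), -1, count)

The first row keeps count=5, while the second row, where count was null, becomes -1.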
Hello Splunkers! Can you please help me understand the Splunk Cloud Victoria Experience, and how it is useful for upgrades in a Splunk Cloud environment? Thanks in advance.
In a multi-site, on-premises Splunk 9.0.0 environment with two sites, do we have to designate a site value on our two deployment servers?

[general]
serverName = DeploymentServer1
pass4SymmKey = $1$OtE23lksSRVW123jaPEzeaoq
site = FirstSite
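For what it's worth, Splunk only accepts multisite site IDs of the form siteN (site1, site2, ...), so a value like FirstSite would be rejected on a node that is actually site-aware. A sketch of the usual form (whether a standalone deployment server needs a site at all depends on whether it also participates in the cluster):

[general]
serverName = DeploymentServer1
pass4SymmKey = $1$OtE23lksSRVW123jaPEzeaoq
# site IDs must match the pattern siteN
site = site1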
Hi, in Splunk Cloud, can I restrict log ingestion when the index capacity reaches its limit on a per-day basis? I have logs which exceed their indexing capacity on certain days. Is there any way I can block ingestion once the capacity reaches its threshold?

Also, another question: is it possible for me to edit the configuration files to filter logs or send them to the null queue on Splunk Cloud, if I want to create a custom app to do so? Please share any related documents to follow.

Thanks, Mala Sundaramoorthy
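For the second question, the standard null-queue pattern looks like the sketch below (the sourcetype my:sourcetype and the DEBUG regex are placeholders). On Splunk Cloud these props/transforms would normally be applied on a heavy forwarder you control, or shipped in an app vetted through the cloud app upload process, rather than edited directly on the cloud stack.

props.conf:

[my:sourcetype]
TRANSFORMS-drop_noise = drop_noise

transforms.conf:

[drop_noise]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue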
Hi everyone, I am new to Splunk and I am learning as I go. I'd like to know if anyone has any idea what I am doing wrong here. The search is supposed to return 36 events, and I do get 36 complete rows, but column 1 (FULLNAME) just keeps going, with empty columns for the rest. I just wish it would stop the FULLNAME column at 36.

index=....firstSearch..... CLOSEDATE="*" (TYPE=10 OR TYPE=11)
| rename ID as UNIQUE_ID
| dedup PARENT UNIQUE_ID
| eval CLOSEDATETIME=strptime(CLOSEDATE, "%Y-%m-%d %H:%M:%S")
| eval from_date=relative_time(now(), "-10d")
| eval to_date=relative_time(now(), "-3d")
| where CLOSEDATETIME >= from_date AND CLOSEDATETIME <= to_date
| fields PARENT UNIQUE_ID DESCRIPTION CLOSEDATE
| table PARENT UNIQUE_ID DESCRIPTION CLOSEDATE
| appendcols
    [search index= ...secondsearch... TYPE=0
    | eval FULLNAME=FIRST." ".LAST]
| fields FULLNAME PARENT UNIQUE_ID DESCRIPTION CLOSEDATE
| table FULLNAME PARENT UNIQUE_ID DESCRIPTION CLOSEDATE

Any help greatly appreciated. I am so stuck on this, and I don't understand why column 1 (FULLNAME) keeps giving me more full names with blank parent numbers and blanks in all of the other columns (UNIQUE_ID, DESCRIPTION, CLOSEDATE) beyond 36 records (events). Eventually I will need to do another appendcols, because I only need one column to append to the overall table at the end. Is this a good approach? join is too costly and is not giving me what I need. This is the closest thing that has worked so far. Thank you and have a good day,

Diana
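Worth knowing about appendcols: it pastes the subsearch's columns onto the main results row by row, by position, with no matching on any key. So if the second search returns more rows than the first (here, more than 36), the extras appear with blanks in the outer columns. A sketch of one way to trim them (assuming PARENT is always populated in the real 36 rows):

... | appendcols
    [search index= ...secondsearch... TYPE=0
    | eval FULLNAME=FIRST." ".LAST]
| where isnotnull(PARENT)

Note, though, that positional pairing rarely lines the right FULLNAME up with the right row; if the two searches share a key field, stats values(...) by that key, or a lookup, is usually safer than appendcols.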
Hello Splunkers, I have a dashboard with multiple panels. At the top of the dashboard I have multiple dropdowns, a time picker, and multi-select inputs. If I want to select or change my dropdown selection after scrolling all the way to the bottom of my dashboard, I need to scroll back up to change it. Is there a way to have the dropdowns stay fixed when scrolling, so they remain visible and I can make changes? Thanks in advance
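One approach people use with Simple XML (a sketch, not an official feature; the .fieldset selector and styling may need adjusting for your Splunk version and theme) is to inject CSS from a hidden HTML panel that pins the input bar to the top of the viewport:

<row depends="$alwaysHiddenCSS$">
  <panel>
    <html>
      <style>
        /* keep the dashboard inputs visible while scrolling */
        .fieldset {
          position: sticky;
          top: 0;
          z-index: 100;
          background: inherit;
        }
      </style>
    </html>
  </panel>
</row>

The $alwaysHiddenCSS$ token is never set, so the row stays hidden while its style block still applies to the page.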
I've configured an inputs.conf to run a single .bat script:

[script://.\bin\scripts\prueba_py.bat]
disabled = 0
_TCP_ROUTING = splunkcloud_prod
index = ldcsap
sourcetype = _json
interval = 0-59/5 * * * *

My batch script prueba_py.bat just executes a Python script called prueba_py.py:

@echo off
python.exe "C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\bin\scripts\prueba_py.py"
exit /b 0

And finally, my Python script only creates a dictionary, converts it to JSON, and prints it:

import json

person = {"name": "Denis", "surname": "Soto", "age": "34"}
print(json.dumps(person))
exit(0)

Given the inputs.conf stanza, it should be executed every 5 minutes, use the _TCP_ROUTING, and index the data into the "ldcsap" index. Well... that's not happening. I'm receiving the following INFO message in splunkd.log, and I cannot find the error:

07-20-2022 16:30:00.033 -0300 INFO ExecProcessor [6652 ExecProcessor] - setting reschedule_ms=299967, for command="C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\bin\scripts\prueba_py.bat"
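That INFO line only shows the input being rescheduled, so the script is probably being launched but its output is dying inside the batch wrapper. A debug sketch of the .bat (the interpreter path C:\Python310\python.exe is an assumption; the service account running splunkd often has a different PATH than an interactive shell, so a bare python.exe may resolve to nothing):

@echo off
REM Use an absolute interpreter path instead of relying on PATH,
REM merge stderr into stdout so Python tracebacks reach splunkd.log,
REM and propagate the real exit code instead of forcing 0.
"C:\Python310\python.exe" "C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\bin\scripts\prueba_py.py" 2>&1
exit /b %ERRORLEVEL%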