All Topics

Hello community, I am trying to combine two different things and cannot figure out how. I am looking at a certain action and counting how many times it is observed per IP address and day. Then I'm plotting per IP by day to try to find recurring events based on IP address. I got this far:

<base-search> earliest="-7d@d" latest="@d" | chart count over ip by date_wday

This does exactly what is intended, though the details are lost as there are a lot of single events per IP address. So, I'd like to filter out any IP address with only one event during the period (here one week). This works fine for the filtering itself:

| stats count as summa by ip | search summa > 1

But then I lose the details needed for the chart part. I figured maybe I could use eval to filter based on the total count, but could not put together anything that worked. Even when I tried to combine stats and eval, I either failed or ended up with something that could not be presented graphically. Any suggestions are more than appreciated. Best regards // G
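
A minimal sketch of one way to do this, assuming the same <base-search>: eventstats adds the per-IP total as an extra field without collapsing the raw events, so the daily detail is still there for the chart after the filter:

<base-search> earliest="-7d@d" latest="@d"
| eventstats count as total by ip
| where total > 1
| chart count over ip by date_wday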
Hi All, I am trying to create an efficient way to pull out certain Windows events for my report, but I am not sure it will return the results I want. It truncates some of the results. I might be doing something wrong. Please see the search that I am currently running and suggest an improvement. Thank you all!

index=mbda_windows_server sourcetype=XmlWinEventLog EventCode=4718 OR 4728 OR 4729 OR 4730 OR 4732 OR 4733 OR 4756 OR 4757 OR 4762 OR 4796 OR 5136
| dedup src_user, MemberSid, Group_Domain, Group_Name, host, _time
| convert timeformat="%d/%m/%Y %H:%M" ctime(_time)
| rename src_user AS Login, MemberSid AS Account, Group_Domain AS Domain, Group_Name AS Group, host AS Host, _time AS Min_NormDateMin, name AS EventName
| table Login, Account, Domain, Group, Host, Min_NormDateMin, EventCode, EventName
| sort EventCode
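
One likely source of unexpected results (a sketch, not a confirmed fix): in the base search above, only the first value is actually compared against EventCode; "OR 4728 OR 4729 ..." matches those numbers as bare terms anywhere in the event. The IN operator keeps every value bound to the field, with the rest of the pipeline unchanged:

index=mbda_windows_server sourcetype=XmlWinEventLog EventCode IN (4718, 4728, 4729, 4730, 4732, 4733, 4756, 4757, 4762, 4796, 5136)
| dedup src_user, MemberSid, Group_Domain, Group_Name, host, _time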
How many values are allowed in an IN clause that is part of a where clause? I want to match 277 values, to be precise.

index=abc sourcetype="ccinfrap.dc_pqr_jws:app" "[SubscriptionService] Consumer sent message" "Not Predicted to Finish"
| rex mode=sed "s/^.*message {/{/"
| rex mode=sed "s/\n}.*/\n}/"
| spath
| fillnull jobStreamName value="BLANK"
| where jobStreamName IN("stream1", "stream2", "stream3", ..., "stream277")
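
For what it's worth, there is no small, IN-specific cap documented for where (IN is an eval expression, so the practical ceiling is overall search-string size), but a list of 277 literals is usually easier to maintain as a lookup. A sketch, assuming a hypothetical CSV lookup jobstreams_allowlist.csv with a jobStreamName column:

| lookup jobstreams_allowlist.csv jobStreamName OUTPUT jobStreamName as allowed
| where isnotnull(allowed)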
Please help me understand the logic of the query below.

eval count=if(isnull(count), -1, count)
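
Reading it left to right: if() evaluates isnull(count); when the count field has no value, the expression returns -1, otherwise it returns the existing count unchanged, so the net effect is "default missing counts to -1". A run-anywhere sketch:

| makeresults
| eval count=if(isnull(count), -1, count)

(makeresults emits one event with no count field, so the eval sets count to -1.)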
Hello Splunkers!! Can you please help me understand the Splunk Cloud Victoria Experience, and how it is useful when upgrading a Splunk Cloud environment? Thanks in advance.
In a multisite, on-premises Splunk 9.0.0 environment with two sites, do we have to designate a site value on our two deployment servers?

[general]
serverName = DeploymentServer1
pass4SymmKey = $1$OtE23lksSRVW123jaPEzeaoq
site = FirstSite
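
A hedged note, not from the poster: multisite site IDs must follow the siteN pattern (site1, site2, ...), so a literal value like FirstSite would be rejected, and a deployment server that is not itself a cluster member generally needs no site attribute at all. If the instance does participate in cluster searches, site0 is the usual "no site affinity" value, roughly:

[general]
serverName = DeploymentServer1
pass4SymmKey = <shared key>
site = site0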
Hi, in Splunk Cloud, can I restrict log ingestion when the index capacity reaches its limit on a per-day basis? I have logs whose volume exceeds the indexing capacity on certain days. Is there any way I can block ingestion once capacity reaches its threshold? Also, I have another question: is it possible for me to edit the configuration files to filter logs or send them to the null queue on Splunk Cloud, if I create a custom app to do so? Please share any related documents to follow. Thanks, Mala Sundaramoorthy
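
For the filtering half, the classic mechanism is a props/transforms pair that routes unwanted events to nullQueue; in Splunk Cloud this is typically delivered via a private app (or via the Ingest Actions UI) rather than by editing files directly. A sketch with hypothetical stanza and regex names:

# props.conf
[my_noisy_sourcetype]
TRANSFORMS-drop_debug = drop_debug

# transforms.conf
[drop_debug]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue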
Hi everyone, I am new to Splunk and I am learning as I go. I'd like to know if anyone has any idea what I am doing wrong here, because it is supposed to return 36 events, and I do get 36 events, but column 1 (FULLNAME) just keeps going, with empty columns for the rest. I just wish it would stop the FULLNAME column at 36.

index=....firstSearch.....CLOSEDATE="*" (TYPE=10 OR TYPE=11)
| rename ID as UNIQUE_ID
| dedup PARENT UNIQUE_ID
| eval CLOSEDATETIME=strptime(CLOSEDATE, "%Y-%m-%d %H:%M:%S")
| eval from_date=relative_time(now(), "-10d")
| eval to_date=relative_time(now(), "-3d")
| where CLOSEDATETIME >= from_date AND CLOSEDATETIME <= to_date
| fields PARENT UNIQUE_ID DESCRIPTION CLOSEDATE
| table PARENT UNIQUE_ID DESCRIPTION CLOSEDATE
| appendcols [search index= ...secondsearch... TYPE=0 | eval FULLNAME=FIRST." ".LAST]
| fields FULLNAME PARENT UNIQUE_ID DESCRIPTION CLOSEDATE
| table FULLNAME PARENT UNIQUE_ID DESCRIPTION CLOSEDATE

Any help greatly appreciated. I am so stuck on this, and I don't understand why column 1 (FULLNAME) keeps giving me more than the necessary 36 events, with blank parent numbers and all of the other columns (UniqueID, description, closedate) empty beyond 36 records. Eventually I will need to do another appendcols, because I only need one column to append to the overall table at the end. Is this a good approach? Joins are too costly and not giving me what I need. This is the closest thing that is working so far. Thank you and have a good day, Diana
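
What's likely happening, plus a sketch: appendcols pastes the subsearch's rows alongside the main rows purely by position, so when the subsearch returns more rows than the main search, the extras appear with blanks in every other column. If the two datasets share a key, a stats-based merge aligns rows by that key instead of by position. A sketch assuming PARENT is the shared key (adjust to whatever actually links the two searches):

(index=....firstSearch..... CLOSEDATE="*" (TYPE=10 OR TYPE=11)) OR (index= ...secondsearch... TYPE=0)
| eval FULLNAME=if(TYPE==0, FIRST." ".LAST, null())
| rename ID as UNIQUE_ID
| stats values(FULLNAME) as FULLNAME values(UNIQUE_ID) as UNIQUE_ID values(DESCRIPTION) as DESCRIPTION values(CLOSEDATE) as CLOSEDATE by PARENT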
Hello SPLUNKERS, I have a dashboard with multiple panels. At the top of the dashboard I have multiple dropdowns, a time selector, and multiselect inputs. If I want to select or change a dropdown selection after scrolling all the way to the bottom of my dashboard, I need to scroll back up to change it. Is there a way to keep the dropdowns fixed while scrolling, so they stay visible and I can make changes? Thanks in advance.
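
One approach sketch for a Simple XML dashboard (unverified CSS; the .fieldset class name is an assumption based on how Simple XML renders its input row): pin the inputs with position: sticky via a hidden HTML panel:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        .fieldset {
          position: sticky;
          top: 0;
          z-index: 1000;
          background-color: inherit;
        }
      </style>
    </html>
  </panel>
</row>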
I've configured an inputs.conf to run a single .bat script:

[script://.\bin\scripts\prueba_py.bat]
disabled = 0
_TCP_ROUTING = splunkcloud_prod
index = ldcsap
sourcetype = _json
interval = 0-59/5 * * * *

My batch script prueba_py.bat just executes a Python script called prueba_py.py:

@echo off
python.exe "C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\bin\scripts\prueba_py.py"
exit /b 0

And finally, my Python script only creates a dictionary, converts it to JSON, and prints it:

import json
person = {"name":"Denis","surname":"Soto","age":"34"}
print(json.dumps(person))
exit(0)

Given the inputs.conf stanza, it should be executed every 5 minutes, using the _TCP_ROUTING and indexing the data to the "ldcsap" index. Well... that's not happening. I'm receiving the following INFO message in splunkd.log and I cannot find the error.

07-20-2022 16:30:00.033 -0300 INFO ExecProcessor [6652 ExecProcessor] - setting reschedule_ms=299967, for command="C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\bin\scripts\prueba_py.bat"
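
A hedged observation: that splunkd line is routine (it only logs when the script will next run), so the input is being scheduled. A frequent cause of silent output in this setup is that the Universal Forwarder ships no Python of its own, so a bare python.exe must resolve on the PATH of the account running splunkd, which is often not the case for a Windows service. Calling the interpreter by absolute path sidesteps that (the interpreter path below is hypothetical):

@echo off
REM Absolute interpreter path: the splunkd service account may not have python.exe on PATH
"C:\Program Files\Python310\python.exe" "C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\bin\scripts\prueba_py.py"
exit /b 0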
We currently have a single stand-alone search head. We did NOT install it as a single-member SHCluster; at the time I didn't know that could be done. We have 2 more servers coming so that we can create a 3-member search head cluster. I've been through a ton of docs but can't figure out the best way to accomplish this task. Is there a way to take the existing stand-alone search head and turn it into a single-member SHCluster, so that I can then easily add the other 2 servers into the cluster? I can't find a doc for that. Or do I have to take 1 of the 2 new servers, set it up as a single-member cluster, migrate all the apps and settings from the existing stand-alone search head to the new single-member cluster server, and then add the other 2 servers? Appreciate any insight anyone can give me. Thanks
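
For context, a hedged sketch of the usual path: Splunk's migration docs steer toward building the cluster on fresh members and migrating settings over, rather than converting a stand-alone search head in place. Initializing and bootstrapping a cluster looks roughly like this (hostnames and the secret are placeholders):

splunk init shcluster-config -mgmt_uri https://sh1.example.com:8089 -replication_port 9887 -secret <shared_key>
splunk bootstrap shcluster-captain -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089"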
My actual query has all this data, but after I use transpose:

| sort 0 -_time
| eval mytime=strftime(_time, "%B %d %Y")
| fields - _*
| transpose header_field=mytime

I only see the result for the first 5 columns. How can I make transpose work for more than 5 days of data? Also, is there a way to generically format the color, since the date changes?
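
On the column cap, a sketch: transpose only keeps 5 rows-turned-columns by default, so passing an explicit count at least as large as the number of days lifts the limit (31 here covers a month):

| sort 0 -_time
| eval mytime=strftime(_time, "%B %d %Y")
| fields - _*
| transpose 31 header_field=mytime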
So I have a field (plugin_output) that has a paragraph of hardware info as one value. The only part of the value I'm concerned with is the "Computer SerialNumber". Is it possible to break this value down into multiple values? I've tried field extraction with no luck; it may be possible to do a string search, but I would also need variables to account for the actual serial number value I want.

<plugin_output>
Computer Manufacturer : VMware, Inc.
Computer Model : VMware7,1
Computer SerialNumber : VMware-65 6d 69 60 3b 89 2a a0-3b 4e bb 3f 2a 95 2f 49
Computer Type : Other
Computer Physical CPU's : 2
Computer Logical CPU's : 4
CPU0 Architecture : x64 Physical Cores: 2 Logical Cores : 2
CPU1 Architecture : x64 Physical Cores: 2 Logical Cores : 2
Computer Memory : 8190 MB
RAM slot #0 Form Factor: DIMM Type : DRAM Capacity : 8192 MB
</plugin_output>
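
A rex sketch for pulling out just the serial, naming the new field serial_number (an arbitrary choice) and assuming the value runs to the end of its line:

| rex field=plugin_output "Computer SerialNumber\s*:\s*(?<serial_number>[^\r\n]+)"

If the whole value is flattened onto a single line instead, anchoring on the next label works:

| rex field=plugin_output "Computer SerialNumber\s*:\s*(?<serial_number>.+?)\s+Computer Type"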
Hello Community, is there a way to create a health rule based on Job Status?
Given a query

| mstats sum(ktm.lag_ms_count) as sum_count where index=ktm

I want to restrict the results based on another attribute, like this:

| mstats sum(ktm.lag_ms_count) as sum_count where index=ktm,ktm.lag_ms_mean > 120000

But this doesn't work. Is it possible to do this kind of filter in mstats? I've been able to do exact-match filters:

where index=ktm,cluster=app

But the range comparison doesn't work.
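
A hedged explanation and sketch: the mstats WHERE clause only filters on the index and dimensions, not on metric values, so a numeric condition has to be applied after the aggregation. Assuming the results should be grouped by a dimension such as cluster (taken from the exact-match example above), and noting this filters the aggregate rather than individual data points:

| mstats sum(ktm.lag_ms_count) as sum_count avg(ktm.lag_ms_mean) as mean_lag where index=ktm by cluster
| where mean_lag > 120000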
Hi Team, we are getting NetApp data into the main index, as the device only supports the default syslog ports. How can I create a props.conf that matches it (something like [host::*netapp*]) and routes it to its own index? What should props.conf and transforms.conf look like for this requirement?
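
A sketch of the standard index-routing pair, placed on the indexers (or the first heavy forwarder in the path); the target index name is hypothetical and must already exist, and REGEX = . simply matches every event from the matching hosts:

# props.conf
[host::*netapp*]
TRANSFORMS-route_netapp = route_netapp

# transforms.conf
[route_netapp]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = netapp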
I have a basic SPL query using mstats, but I can't use trellis with it. Any ideas why I can't select "severity"?

| mstats count("mx.process.logs") as count WHERE "index"="murex_metrics" BY severity
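
One workaround sketch (unverified): the trellis layout only offers split-by fields when it recognizes an aggregation in the search, and it does not always parse mstats; appending an explicit stats pass often makes the severity split selectable:

| mstats count("mx.process.logs") as count WHERE "index"="murex_metrics" BY severity
| stats sum(count) as count by severity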
Hello, we have an issue reindexing archives (gz files), even when using crcSalt = <SOURCE> or crcSalt = REINDEXMPLEASE. We CAN'T go on each UF and clean the fishbucket.

UF (v7.1.4) Linux splunkd.log:

07-19-2022 18:19:09.129 +0200 INFO ArchiveProcessor - Handling file=/var/log/MAJ-OS.log-20220601.gz
07-19-2022 18:19:09.130 +0200 INFO ArchiveProcessor - reading path=/var/log/MAJ-OS.log-20220601.gz (seek=0 len=1356)
07-19-2022 18:19:09.281 +0200 INFO ArchiveProcessor - Archive with path="/var/log/MAJ-OS.log-20220601.gz" was already indexed as a non-archive, skipping.
07-19-2022 18:19:09.281 +0200 INFO ArchiveProcessor - Finished processing file '/var/log/MAJ-OS.log-20220601.gz', removing from stats

It also says "new tailer already processed path..."

inputs.conf app from deployment-apps (v8.2.2):

[monitor:///var/log/MAJ-OS.log*]
blacklist = archives
disabled = false
index = inf-servers
sourcetype = MAJ-OS
crcSalt = <SOURCE>

Thanks for your help.
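
A hedged note: the "already indexed as a non-archive" message comes from the ArchiveProcessor, which tracks the decompressed content, and crcSalt does not appear to be honored on that code path. One workaround that avoids touching the fishbucket is to feed a file in directly as a oneshot on the forwarder (file path copied from the log above):

splunk add oneshot /var/log/MAJ-OS.log-20220601.gz -index inf-servers -sourcetype MAJ-OS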
Hello, I am writing a search for a dashboard panel, and it shows the expected result when I run it as a search, but when it is added to the dashboard, the date-time format changes. This search gets all requesters with more than 25 tickets created in the last 13 months.

index="tickets" host="TICKET_DATA" source="D:\\Tickets_Data_Export.csv" "Department Name"="ABC" earliest=-14mon@mon
| sort 0 -_time
| foreach * [ eval newFieldName=replace("<<FIELD>>", "\s+", ""), {newFieldName}='<<FIELD>>' ]
| fields - "* *", newFieldName
| eval TicketCategory=if(isnull(TicketCategory),"Non-Chargeable",TicketCategory)
| eval ID=substr(ID, len(ID)-5, 6)
| dedup ID
| eval tempDt=strptime(CreatedDate, "%Y-%m-%d %H:%M:%S")
| eval YYMM=strftime(tempDt,"%Y-%m")
| where tempDt>=relative_time(now(),"-13mon@mon") and tempDt<=relative_time(now(),"@mon")
| chart count as Count over RequesterName by YYMM where Count in top13
| addtotals
| where Total>=25
| sort 0 -Total

The output when run on the Search screen vs. the output when added as a panel in Dashboard Studio is attached here; the YYMM column format is different in the two cases. Can you please help me get the column names in YYYY-MM (and not full date-and-time format) in the Dashboard Studio panel? Column YYMM displays YYYY-MM correctly when I select the "Parallel Coordinates" visualization, but that is not a fit for my use case. Thank you.
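
One workaround sketch (unverified): Dashboard Studio reformats column values it can parse as timestamps, so making the header strings non-parseable as dates sidesteps the behavior; rename accepts wildcards, and the "m " prefix here is arbitrary:

| chart count as Count over RequesterName by YYMM where Count in top13
| rename "2*" as "m 2*"
| addtotals
| where Total>=25
| sort 0 -Total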
I am trying to access a CrowdStrike Intel endpoint where an OAuth2 token is needed. When I test asset connectivity, I get the error message below, which I believe is due to the length of the token string. How do I fix this error?

ERROR MESSAGE
Using provided token to authenticate
Got error: 401
2 actions failed
handle_action exception occurred. Error string: ''access_token''