Hi @cross521, yes, the use case you describe is possible and easy to create. I suppose that you have already ingested the data and stored it in an index using a sourcetype (item 1). I also suppose that you have already extracted the fields associated with that sourcetype (item 2); if not, please share a sample of your logs. For item 3, I need to know how to identify failures: in the following example I use the rule that when there is a failure, the "status" field has the value "failure", and you have to define the fields to add to the results. At the end, you can download the CSV from the GUI, or use the outputcsv command (at the end of the search), which saves the CSV in $SPLUNK_HOME/var/run/splunk/csv. It isn't possible to choose a different folder for saving; if you want a different one, you have to create a custom script to move the file.

index=your_index status=failure
| table _time host field1 field2
| outputcsv your_csv.csv

If there are different conditions, you can modify my search. Ciao. Giuseppe
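Since outputcsv always writes under $SPLUNK_HOME/var/run/splunk/csv, the "custom script" mentioned above can be very small. A minimal sketch in Python; the filename your_csv.csv follows the example, and the destination directory is an assumption:

```python
import shutil
from pathlib import Path

def move_outputcsv(splunk_home: str, csv_name: str, dest_dir: str) -> Path:
    """Move a CSV produced by outputcsv out of Splunk's fixed csv directory."""
    src = Path(splunk_home) / "var" / "run" / "splunk" / "csv" / csv_name
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)  # create the target folder if missing
    target = dest / csv_name
    shutil.move(str(src), str(target))
    return target
```

Such a script could be run from cron (or as a Splunk scripted alert action) right after the scheduled search completes.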
How can I dedup the "Owner Group" field without disturbing other fields in the table? My query:

| eval time_period="01-Nov-23"
| eval time_period_epoc=strptime(time_period,"%d-%b-%y")
| where epoc_time_submitted <= time_period_epoc
| join max=0 type=left current_ticket_state
    [| inputlookup monthly_status_state_mapping.csv
     | rename Status as current_ticket_state "Ageing Lookup" as state
     | table current_ticket_state state]
| eval age=Final_TAT_days
| eval total_age=round(age,2)
| rangemap field=total_age "0-10days"=0-11 "11-20 Days"=11.01-20.00 "21-30 Days"=20.01-30 "31-40 Days"=30.01-40 "41-50 Days"=40.01-50 "51-60 Days"=50.01-60 "61-70 Days"=60.01-70 "71-80 Days"=70.01-80 "81-90 Days"=80.01-90 "91-100 Days"=90.01-100 ">100 Days"=100.01-1000
| stats count by work_queue state range
| eval combined=work_queue."|".state
| chart max(count) by combined range
| eval work_queue=mvindex(split(combined,"|"),0)
| eval state=mvindex(split(combined,"|"),1)
| fields - combined
| table work_queue state "11-20 Days" "21-30 Days" "31-40 Days" "41-50 Days" "51-60 Days" "61-70 Days" "71-80 Days" "81-90 Days" "91-100 Days" ">100 Days"
| rename work_queue as "Owner Group"
| fillnull value=0
| addtotals
So a single event with two timestamps is easy:

| rex "time1=(?<t1>[^Z]*Z)"
| rex "time2=(?<t2>[^Z]*Z)"
| eval t1_t=strptime(t1, "%FT%TZ")
| eval t2_t=strptime(t2, "%FT%TZ")
| eval diff=t2_t-t1_t

You can optimise that into a single rex if you can guarantee the ordering of your _raw data.
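For comparison, the same two-timestamp subtraction can be sketched outside Splunk in plain Python. The sample event and the time1/time2 field names are taken from the question in this thread; the %Y-%m-%dT%H:%M:%SZ format mirrors strptime's %FT%TZ:

```python
import re
from datetime import datetime, timezone

RAW = "source=abc time1=2023-11-10T00:33:53Z time2=2023-11-11T12:33:53Z"

def time_diff_seconds(raw: str) -> float:
    """Extract time1/time2 from a raw event and return time2 - time1 in seconds."""
    t1 = re.search(r"time1=([^Z ]*Z)", raw).group(1)
    t2 = re.search(r"time2=([^Z ]*Z)", raw).group(1)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    d1 = datetime.strptime(t1, fmt).replace(tzinfo=timezone.utc)
    d2 = datetime.strptime(t2, fmt).replace(tzinfo=timezone.utc)
    return (d2 - d1).total_seconds()
```

For the sample event this yields 129600 seconds (36 hours), matching what the SPL diff field would show.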
Hello, I received the following error; the issue resolved itself after 4 hours. The CSV file size is 54 MB.

Streamed search execute failed because: Error in 'lookup' command: Failed to re-open lookup file: 'opt/splunk/var/run/searchpeers/[random number]/apps/[app-name]/lookups/test.csv'

I am aware that there is already a post regarding this, but I have more questions.

1) What is the cause of this error? Is it the bug mentioned in the old post below? I am running 9.0.4, where that bug should have been fixed. https://community.splunk.com/t5/Splunk-Enterprise/Message-quot-Streamed-search-execute-failed-because-Error-in/m-p/569878

2) a) Is it because max_memtable_bytes in limits.conf is 25MB? https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Limitsconf
b) How do I check limits.conf via the GUI without the admin role?
c) What does "Lookup files with size above max_memtable_bytes will be indexed on disk" mean? Is that a good thing or a bad thing?
d) If I see a cs.index.alive file auto-generated, does it mean the lookup is indexed on disk?
[random number]/apps/[app-name]/lookups/test.csv
[random number]/apps/[app-name]/lookups/test.csv_[random number].cs.index.alive

3) If I am not allowed to change any settings (e.g. increase the 25MB limit), what is the solution for this issue?

I appreciate your help. Thank you
Good questions. You can't get the sorted order inside eventstats. list(Score) will put the values in the order they are found, so they will not be sorted, and mvsort will not sort numerically, so it cannot be used; hence mvfind would not find the correct position. The other issue with list(Score) is that it can only cope with 100 values, so it will fail at that point.

As to whether there is an alternate solution, the following is probably a better option, as it does not have the limitations of list() and does not require mvfind. It may also be more efficient.

| makeresults count=10
| fields - _time
| streamstats c as Score
| eval Student="Student ".(11 - Score)
| table Student Score
``` Above simulates your data ```
``` Generate list of scores and find position in results ```
| sort Score
| streamstats count as pos
| eventstats count
``` Now calculate ranks ```
| eval Rank_Inc=round((pos-1)/(count-1)*100, 0)
| eval Rank_Exc=round((pos+0)/(count+1)*100, 0)
| fields - pos count

You still have to sort the scores, and it uses streamstats to identify position (rather than mvfind). I think there may be a difference in behaviour when there are multiple students with the same score: mvfind would always find the position of the first instance of that score, whereas streamstats as above uses the student's own position. However, you could probably solve that issue.
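The two rank formulas in the search above can be checked in plain Python; this is only an illustration of the same (pos-1)/(count-1) and pos/(count+1) arithmetic, not Splunk code:

```python
def percentile_ranks(scores):
    """Return (score, rank_inc, rank_exc) per score, using 1-based position in sorted order."""
    ordered = sorted(scores)
    n = len(ordered)
    out = []
    for pos, score in enumerate(ordered, start=1):
        rank_inc = round((pos - 1) / (n - 1) * 100)  # inclusive: endpoints map to 0 and 100
        rank_exc = round(pos / (n + 1) * 100)        # exclusive: endpoints stay inside (0, 100)
        out.append((score, rank_inc, rank_exc))
    return out
```

With ten scores 1..10, the lowest score gets inclusive rank 0 and exclusive rank 9, and the highest gets 100 and 91, which is what the SPL Rank_Inc/Rank_Exc fields would produce.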
Hello, I tried your suggestion and it worked, so I accepted this as the solution. I have a few questions: 1) Is there a way to move the "sort" command into eventstats, so we don't have 2 lines? 2) Is it possible to do this calculation without using mvfind? Thank you so much
Hi @Peterm1993 .. Please add karma  / upvote the reply which helped you.. thanks. 
Figured it out, thanks for your help.

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where OrientationError>0
| table _time OrderId OrientationError *
| bin _time span=1h
| eval _time=strftime(_time,"%dt%H")
| chart sum(OrientationError) as ErrorFrequency over Location by _time useother=f limit=200
| addtotals
| sort 0 - Total _time
| fields - TOTAL

was what I was looking for!
Hi @Peterm1993 .. as Rich suggested, the bin command should be adjusted to an hourly span, and then the strftime format should be changed from "%d" to "%H" (if %H does not work, please copy-paste a sample event's _time value so we can double-check how the hours look, i.e. 12-hour or 24-hour). Please try this search query.. thanks.

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| table _time OrderId ProtrusionError OffCentreError Dimension *
| bin _time span=1h
| eval _time=strftime(_time,"%H")
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total _time
| fields - TOTAL
Hi @inventsekar, I'm trying to convert the results from a daily total to an hourly breakdown (apologies, I'm very new to Splunk). So instead of, for example:

9/11/23 165 errors

it would be:

1am-2am 12 errors
2am-3am 35 errors
3am-4am 12 errors
...

totalling 165 errors.
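What bin span=1h does conceptually (one daily count split into per-hour counts that still sum to the daily total) can be sketched in Python; the epoch timestamps below are made up for illustration:

```python
from collections import Counter
from datetime import datetime, timezone

def hourly_counts(epochs):
    """Bucket epoch timestamps by hour of day (UTC) and count events per bucket."""
    hours = (datetime.fromtimestamp(e, tz=timezone.utc).hour for e in epochs)
    return dict(Counter(hours))

# Usage: 12 events at 1am, 35 at 2am, 12 at 3am on the same day
base = 1699488000  # an arbitrary midnight-UTC epoch
events = [base + 3600 * 1] * 12 + [base + 3600 * 2] * 35 + [base + 3600 * 3] * 12
counts = hourly_counts(events)
```

The per-hour buckets always add back up to the daily total, which is exactly the relationship between the daily and hourly charts.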
Hi guys, I am performing a POC to import our parquet files into Splunk. I have managed to write a Python script to extract the events (i.e. the raw logs) into a dataframe. I also wrote a Python script to pump the logs via the syslog protocol to a HF and then to the indexer. I am using the syslog method because I have many log types, and with [udp://portnumber] stanzas I can ingest multiple types of logs at once into different sourcetypes. However, when I do this I am not able to retain the original datetime of the raw event; Splunk takes the datetime at the point I sent the event instead. Secondly, I am using Python because all these parquet files are stored in an S3 container, so it is easier for me to loop through the directory and extract the files.

I was hoping someone can help me out: how can I get the original timestamp of the logs? Or is there a more effective way of doing this?

Sample log from Splunk after indexing:

Nov 10 09:45:50 127.0.0.1 <190>2023-09-01T16:59:12Z server1 server2 %NGIPS-6-430002: DeviceUUID: xxx-xxx-xxx

Here is my code to push the events via syslog:

import logging
import logging.handlers
import socket
from IPython.display import clear_output

# Create the logger. Note that this logger is different from ArcSight Logger.
my_loggerudp = logging.getLogger('MyLoggerUDP')
# We will pass the message as INFO
my_loggerudp.setLevel(logging.INFO)

# Define the SyslogHandler (UDP).
# localhost:1025 = IP address and port of the syslog collector
# (Connector Appliance, Logger etc.); the default syslog port is 514.
handlerudp = logging.handlers.SysLogHandler(address=('localhost', 1025), socktype=socket.SOCK_DGRAM)
my_loggerudp.addHandler(handlerudp)

# Send each event from the dataframe
event = df["event"]
count = len(event)
for x in event:
    clear_output(wait=True)
    my_loggerudp.info(x)
    my_loggerudp.handlers[0].flush()
    count -= 1
    print(f"logs left to be transmitted {count}")
    print(x)
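One common way to make Splunk honour the timestamp embedded in the event body (the 2023-09-01T16:59:12Z after the <190> PRI value) rather than the syslog receipt time is timestamp extraction in props.conf for that sourcetype on the HF/indexer. A hedged sketch, assuming the layout of the sample event above; the sourcetype name is a placeholder:

```
# props.conf (sketch -- adjust the stanza name and format to your data)
[your_syslog_sourcetype]
# Start looking for the timestamp right after the <PRI> field, e.g. <190>2023-09-01T16:59:12Z
TIME_PREFIX = <\d+>
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = UTC
```

With this in place, _time should come from the original event rather than the moment the UDP packet arrived.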
@diptij wrote: Thank-you! Does that mean if I set '* hard maxlogins 10', splunk will operate correctly?

Yes, Splunk will work just fine; it will not complain about maxlogins. Upvotes are appreciated, and if the query is solved, please accept this as the solution, thanks.
Please explain this use case more.  You say you're looking for matches, but the example output contains 4 unique results.  What is expected to match in that?  Please provide a sample match.
Change the bin command to set the desired interval.  Then adjust the strftime function. | bin _time span=1h | eval _time=strftime(_time,"%H")  
Hi @Peterm1993 .. do you mean you want to convert the number of days to a number of hours (days divided by 24), or, when you are using strftime, instead of picking up the day (%d) you want to pick up the hour? Please confirm.. thanks.

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| table _time OrderId ProtrusionError OffCentreError Dimension *
| bin _time span=1d
| eval Total_time=strftime(_time,"%d") ```Comment - looks like you mistyped "Total_time" as "_time"```
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total _time
| fields - TOTAL
This is my code for the dropdown:

<input type="dropdown" token="start_time" searchWhenChanged="true">
  <label>First IR init Time (sec)</label>
  <fieldForLabel>start_time</fieldForLabel>
  <fieldForValue>start_time</fieldForValue>
  <search>
    <query>index=idx_ptd_dataset sourcetype="type:ptd_dataset:data" corp="flight" | where !isnull(location) | where !isnull(landing_time) | eval st_time=round(landing_time,0) | where st_time &lt;=90 | stats values by st_time | sort st_time</query>
  </search>
  <default>ALL</default>
  <choice value="ALL">ALL</choice>
</input>
Hi @bowesmana .. just thought to tell you, the rex was missing a closing parenthesis; with that fixed, your rex and strptime work nicely.

Hi @djoobbani .. as said in the reply above, please let us know whether the time field is "_time" or you want to extract it from the msg. If you want to extract it from the msg, then, assuming the 2nd msg looks like the 1st msg, try something like this:

| makeresults
| eval msg1="some message dfsdfdfgfdggfg fgdfdgfdg \"time\":\"2023-11-09T21:33:05.0738373837278Z, abcefg"
| eval msg2="some message dfsdfdfgfdggfg fgdfdgfdg \"time\":\"2023-11-09T21:33:10.0738373837278Z, abcefg"
| rex field=msg1 "time.:.(?<event1_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| rex field=msg2 "time.:.(?<event2_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| eval e1_t=strptime(event1_time, "%FT%T")
| eval e2_t=strptime(event2_time, "%FT%T")
| eval diff=e1_t-e2_t
| table event1_time event2_time diff

This gives the result:

event1_time event2_time diff
2023-11-09T21:33:05 2023-11-09T21:33:10 -5.000000
Thanks @bowesmana. OK, let's make it simpler. I have this event:

Event
source=abc time1=2023-11-10T00:33:53Z time2=2023-11-11T12:33:53Z

How would you construct the query, using rex, so that time2 is subtracted from time1 and the time difference is displayed in the result? Thanks!
Hi, I'm trying to convert this search to show totals in hours instead of days/dates. Can anyone help me please?

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| table _time OrderId ProtrusionError OffCentreError Dimension *
| bin _time span=1d
| eval _time=strftime(_time,"%d")
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total _time
| fields - TOTAL
Thank you for the answer. Here is an example of the data I would like to process:
1. There are 3 years of data, accumulated every 2 seconds.
2. The value of a particular point is always 0 and only becomes 1 or more when a failure occurs.
3. I would like to retrieve the records of any failures over the 3-year period, i.e. the spikes in the data, and save them in CSV format.
Can you help me one more time?
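Outside Splunk, the extraction described in steps 1-3 amounts to keeping only the rows whose value is nonzero and writing them to CSV; in SPL this would be a `where value >= 1` followed by `outputcsv`. A minimal Python sketch with made-up (timestamp, value) rows:

```python
import csv
import io

def write_spikes(rows, out):
    """Write only the rows whose value >= 1 (the failure spikes) as CSV."""
    writer = csv.writer(out)
    writer.writerow(["time", "value"])
    for t, v in rows:
        if v >= 1:  # the value is 0 except during a failure
            writer.writerow([t, v])

# Usage: keep spikes from a stream of 2-second samples
samples = [("2023-11-10T00:00:00Z", 0),
           ("2023-11-10T00:00:02Z", 3),
           ("2023-11-10T00:00:04Z", 0)]
buf = io.StringIO()
write_spikes(samples, buf)
```

Only the failure record survives in the output, so even 3 years of 2-second samples collapse to a short CSV of spikes.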