All Posts

Thank you @ITWhisperer, the above solution worked.
Hi @law175, did you receive any message from Splunk? The behavior you describe is typical of a license violation: this occurs if you index more logs than your daily quota more than 2 times in 30 calendar days on a Trial License, or more than 45 times in 60 calendar days on a Term License. Check the [Settings > License] page. If you're not in violation, there could be another explanation: are you using the index in your main search? In other words, try to add index=your_index or index=* at the beginning of your search, because your index may not be in the default search path. Third possibility: do you have the rights to read data from that index? Ciao. Giuseppe
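A quick way to check the third point - which indexes your role can actually read - is the following sketch (my addition, not part of the original reply; it assumes your role is allowed to run eventcount across indexes):

| eventcount summarize=false index=*
| dedup index
| table index

If your_index does not appear in that list, ask your admin to add it to your role's allowed and/or default indexes.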
Hi @gjhaaland, after a timechart command the only columns you have are the count and the values of the src_sg_info field. Are user1, user2 etc. values of the src_sg_info field? If they are, you have to change the values before the timechart command using eval, not rename (rename changes the name of a field, not its values):

index=asa host=1.2.3.4 src_sg_info=*
| eval src_sg_info=case(src_sg_info="user1","David E",src_sg_info="user2","Mary E",src_sg_info="user3","Lucy E")
| timechart span=10m dc(src_sg_info) by src_sg_info

Ciao. Giuseppe
Hi, the code is the following:

index=asa host=1.2.3.4 src_sg_info=*
| timechart span=10m dc(src_sg_info) by src_sg_info
| rename user1 as "David E"
| rename user2 as "Mary E"
| rename user3 as "Lucy E"

If the number of users is 0, then we know there is no VPN user at all. The plan is to print that out together with the active VPN users in the timechart, if possible. I'll try to explain how it should look below.

[rough sketch of the desired timechart: one line per user (user2, user3, ...) plus a "No VPN user" marker, plotted along the time axis]
Hi @m_pham, yes, I tried, but this information always stays the same. In addition, I found that sometimes the values for the Search Heads are displayed as "N.A.", and I noticed that sometimes (not always), after forcing Apply Configuration, some of the values in the Summary dashboard are displayed, but not always and not all of them. Splunk Support told me that the second one is a known bug that will be fixed in 9.1.2. Ciao. Giuseppe
We are having issues with data models from Splunk_SA_CIM running for a very long time (hitting the limit) and causing out-of-memory (OOM) problems on our indexers. We have brand new physical servers with 128 GB RAM and 48 cores. The Enterprise Security search head cluster has data models enabled, running against both the old and the new hardware. Yet we are getting OOM on the new hardware, and every run hits our 30+ minute limit. Example configuration for the Authentication DMA:

allow_old_summaries = true
allow_skew = 5%
backfill_time = -1d
cron_schedule = */5 * * * *
earliest_time = -6mon
hunk.compression_codec = -
hunk.dfs_block_size = 0
hunk.file_format = -
manual_rebuilds = true
max_concurrent = 1
max_time = 1800

Any tips on troubleshooting data models running for a very long time and causing out of memory (OOM)? Thanks!
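One place to start (my suggestion, not from the original post) is to measure how long each acceleration job actually runs and whether runs are being skipped. A minimal sketch, assuming the scheduler logs reach the _internal index and the acceleration searches keep the default _ACCELERATE_DM_* naming:

index=_internal sourcetype=scheduler savedsearch_name="_ACCELERATE_DM_*"
| stats count avg(run_time) as avg_runtime_s max(run_time) as max_runtime_s count(eval(status=="skipped")) as skipped by savedsearch_name
| sort - max_runtime_s

If a single data model dominates, narrowing its constraint searches or its earliest_time is usually the next step.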
Hi @cross521, yes, the use case you describe is possible and easy to create. I suppose that you have already ingested the data and stored it in an index using a sourcetype (item 1). I also suppose that you have already extracted the fields associated with that sourcetype (item 2); if not, please share a sample of your logs. For item 3, I need to know how to identify failures; in the following example I use the rule that, if there's a failure, the "status" field has the value "failure", and you have to define the fields to add to the results. At the end, you can download the csv from the GUI or use the outputcsv command (at the end of the search), which saves the csv in $SPLUNK_HOME/var/run/splunk/csv; it isn't possible to use a different location for the saving folder, so if you want a different one you have to create a custom script to move the file.

index=your_index status=failure
| table _time host field1 field2
| outputcsv your_csv.csv

If there are different conditions you can modify my search. Ciao. Giuseppe
How can I dedup the "Owner Group" field without disturbing the other fields in the table? My query:

| eval time_period= "01-Nov-23"
| eval time_period_epoc=strptime(time_period,"%d-%b-%y")
| where epoc_time_submitted <= time_period_epoc
| join max=0 type=left current_ticket_state [|inputlookup monthly_status_state_mapping.csv | rename Status as current_ticket_state "Ageing Lookup" as state | table current_ticket_state state]
| eval age= Final_TAT_days
| eval total_age=round(age,2)
| rangemap field=total_age "0-10days"=0-11 "11-20 Days"=11.01-20.00 "21-30 Days"=20.01-30 "31-40 Days"=30.01-40 "41-50 Days"=40.01-50 "51-60 Days"=50.01-60 "61-70 Days"=60.01-70 "71-80 Days"=70.01-80 "81-90 Days"=80.01-90 "91-100 Days"=90.01-100 ">100 Days"=100.01-1000
| stats count by work_queue state range
| eval combined=work_queue."|".state
| chart max(count) by combined range
| eval work_queue=mvindex(split(combined,"|"),0)
| eval state=mvindex(split(combined,"|"),1)
| fields - combined
| table work_queue state "11-20 Days" "21-30 Days" "31-40 Days" "41-50 Days" "51-60 Days" "61-70 Days" "71-80 Days" "81-90 Days" "91-100 Days" ">100 Days"
| rename work_queue as "Owner Group"
| fillnull value=0
| addtotals
So a single event with two timestamps is easy:

| rex "time1=(?<t1>[^Z]*Z)"
| rex "time2=(?<t2>[^Z]*Z)"
| eval t1_t=strptime(t1, "%FT%TZ")
| eval t2_t=strptime(t2, "%FT%TZ")
| eval diff=t2_t-t1_t

You can optimise that to a single rex if you can guarantee the ordering of your _raw data.
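For instance, a combined version (my sketch, assuming time1 always appears before time2 in _raw) could be:

| rex "time1=(?<t1>[^Z]*Z).*time2=(?<t2>[^Z]*Z)"
| eval diff=strptime(t2, "%FT%TZ") - strptime(t1, "%FT%TZ")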
Hello, I received the following error; the issue resolved itself after 4 hours. The CSV file size is 54 MB.

Streamed search execute failed because: Error in 'lookup' command: Failed to re-open lookup file: 'opt/splunk/var/run/searchpeers/[random number]/apps/[app-name]/lookups/test.csv'

I am aware that there is already a post regarding this, but I have more questions.

1) What is the cause of this error? Is it because of the bug mentioned in the old post below? I am running 9.0.4, so the bug should have been fixed. https://community.splunk.com/t5/Splunk-Enterprise/Message-quot-Streamed-search-execute-failed-because-Error-in/m-p/569878

2) a) Is it because max_memtable_bytes in limits.conf is 25MB? https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Limitsconf
b) How do I check limits.conf via the GUI without an admin role?
c) What does "Lookup files with size above max_memtable_bytes will be indexed on disk" mean? Is it a good thing or bad?
d) If I see a cs.index.alive file auto-generated, does it mean the lookup is indexed on disk?
[random number]/apps/[app-name]/lookups/test.csv
[random number]/apps/[app-name]/lookups/test.csv_[random number].cs.index.alive

3) If I am not allowed to change any setting (increase the 25MB limit), what is the solution for this issue?

I appreciate your help. Thank you
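(Side note, not from the original post: one hedged way to read the effective max_memtable_bytes value without filesystem access is the REST endpoint for limits.conf. This typically requires the rest_properties_get capability, so it may still be blocked for a non-admin role.)

| rest /services/configs/conf-limits/lookup splunk_server=local
| fields max_memtable_bytes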
Good questions. You can't get the sorted order inside eventstats. list(Score) will put the values in the order they are found, so they will not be sorted, and mvsort will not sort numerically, so it cannot be used; as a result, mvfind will not get the correct position. The other issue with list(Score) is that it can only cope with 100 values, so it will fail beyond that point. As to whether there is an alternative solution, the following is probably a better option, as it does not have the limitations of list() and does not require mvfind. It may also be more efficient.

| makeresults count=10
| fields - _time
| streamstats c as Score
| eval Student="Student ".(11 - Score)
| table Student Score
``` Above simulates your data ```
``` Generate list of scores and find position in results ```
| sort Score
| streamstats count as pos
| eventstats count
``` Now calculate ranks ```
| eval Rank_Inc=round((pos-1)/(count-1)*100, 0)
| eval Rank_Exc=round((pos+0)/(count+1)*100, 0)
| fields - pos count

You still have to sort the scores, and it uses streamstats to identify the position (rather than mvfind). I think there may be a difference in behaviour when there are multiple students with the same score: using mvfind would always find the position of the first instance of that score, whereas streamstats as above uses each user's own position. However, you could probably solve that issue.
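On the tie question, one possible tweak (my sketch, not part of the original answer) is to force every tied score to share the first position, which mimics what mvfind would have returned:

| sort Score
| streamstats count as pos
| eventstats min(pos) as pos by Score
| eventstats count
| eval Rank_Inc=round((pos-1)/(count-1)*100, 0)
| eval Rank_Exc=round((pos+0)/(count+1)*100, 0)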
Hello, I tried your suggestion and it is working. I accepted this as a solution. I have a few questions: 1) Is there a way to move the "sort" command into eventstats, so we don't have 2 lines? 2) Is it possible to do this calculation without using mvfind? Thank you so much
Hi @Peterm1993 .. Please add karma  / upvote the reply which helped you.. thanks. 
Figured it out, thanks for your help.

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where OrientationError>0
| table _time OrderId OrientationError *
| bin _time span=1h
| eval _time=strftime(_time,"%dt%H")
| chart sum(OrientationError) as ErrorFrequency over Location by _time useother=f limit=200
| addtotals
| sort 0 - Total _time
| fields - TOTAL

This was what I was looking for!
Hi @Peterm1993 .. As Rich suggested, the bin command should be adjusted to an hourly span and the strftime format should be changed from "%d" to "%H" (if "%H" does not work, please copy-paste a sample event's _time value... we should double-check how the hours look (12-hour or 24-hour)). Please try this search query.. thanks.

index=analyst reporttype=DepTrayCaseQty Location=DEP/AutoDep*
| where Dimension>0 OR ProtrusionError>0 OR OffCentreError>0
| table _time OrderId ProtrusionError OffCentreError Dimension *
| bin _time span=1h
| eval _time=strftime(_time,"%H")
| eval foo=ProtrusionError+OffCentreError+Dimension
| chart sum(foo) as ErrorFrequency over Location by _time useother=f limit=100
| addtotals
| sort 0 - Total _time
| fields - TOTAL
Hi @inventsekar, I'm trying to convert the results from a daily total to an hourly breakdown. So instead of, for example (and apologies, because I'm very new to Splunk):

9/11/23 165 errors

it would be:

1am-2am 12 errors
2am-3am 35 errors
3am-4am 12 errors

totaling 165 errors.
Hi guys, I am performing a POC to import our parquet files into Splunk. I have managed to write a Python script to extract the events (aka raw logs) into a df. I also wrote a Python script to push the logs via the syslog protocol to a HF and then to the indexer. I am using the syslog method because I have many log types, and with [udp://portnumber] I can ingest multiple types of logs at once into different sourcetypes. However, when I do this I am not able to retain the original datetime of the raw event; instead it takes the datetime of the moment I sent the event. Secondly, I am using Python because all these parquet files are stored in an S3 bucket, so it is easier for me to loop through the directory and extract the files. I was hoping someone could help me with how to get the original timestamp of the logs, or whether there is a more effective way of doing this. Sample log from Splunk after indexing:

Nov 10 09:45:50 127.0.0.1 <190>2023-09-01T16:59:12Z server1 server2 %NGIPS-6-430002: DeviceUUID: xxx-xxx-xxx

Here's my code to push the events via syslog:

import logging
import logging.handlers
import socket

from IPython.display import clear_output

# Create the logger (note: this is a Python logger, not an ArcSight logger)
my_loggerudp = logging.getLogger('MyLoggerUDP')
# my_loggertcp = logging.getLogger('MyLoggerTCP')

# We will pass the message as INFO
my_loggerudp.setLevel(logging.INFO)

# Define the SyslogHandler
# TCP
# handlertcp = logging.handlers.SysLogHandler(address=('localhost', 1026), socktype=socket.SOCK_STREAM)
# UDP
handlerudp = logging.handlers.SysLogHandler(address=('localhost', 1025), socktype=socket.SOCK_DGRAM)
# localhost/1025 = address and port of the syslog collector (the HF input); by default syslog uses 514

my_loggerudp.addHandler(handlerudp)
# my_loggertcp.addHandler(handlertcp)

# Pass the events extracted from the parquet files (df comes from the earlier extraction script)
event = df["event"]
count = len(event)

for x in event:
    clear_output(wait=True)
    my_loggerudp.info(x)
    my_loggerudp.handlers[0].flush()
    count -= 1
    print(f"logs left to transmit: {count}")
    print(x)
@diptij wrote: Thank-you! Does that mean if I set '* hard maxlogins 10', splunk will operate correctly? Sure, yes, Splunk will work just fine. Splunk will not complain about maxlogins at all. Upvotes are appreciated; if the query is solved, please accept it as a solution, thanks.
Please explain this use case more.  You say you're looking for matches, but the example output contains 4 unique results.  What is expected to match in that?  Please provide a sample match.
Change the bin command to set the desired interval. Then adjust the strftime function.

| bin _time span=1h
| eval _time=strftime(_time,"%H")