All Posts

| fieldformat StartTime = strftime(StartTime, "%F %T.%3N") | fieldformat EndTime = strftime(EndTime, "%F %T.%3N")
Unless you are running your search at exactly midnight, the last 7 days will be spread over 8 days. You need to use the relative option in the time picker and align to the start and end of days to get exactly 7 days' worth of events.
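For example (just a sketch, reusing the index and sourcetype from the related question), snapping to day boundaries with relative time modifiers returns exactly 7 whole days:

index="abc" sourcetype=600000304_gg_abs_ipc2 earliest=-7d@d latest=@d
| bin _time span=1d
| stats count by _time

Here earliest=-7d@d latest=@d means "from midnight 7 days ago up to midnight today", so no partial day is included.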
Hi All, I have created the below query:

search index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
| rex "TRIM\.CNX(CTR)?\.(?<TRIM_ID>\w+)"
| transaction TRIM_ID startswith="Reading Control-File /absin/TRIM.CNXCTR." endswith="Completed Settlement file processing, TRIM.CNX."
| eval StartTime=min(_time)
| eval EndTime=StartTime+duration
| eval duration_min=floor(duration/60)
| rename duration_min as TRIM.CNX_Duration
| table StartTime EndTime TRIM.CNX_Duration
| sort +StartTime +EndTime]
| fieldformat ProcessingStartTime = strftime(ProcessingStartTime, "%F %T.%3N")
| fieldformat ProcessingEndTime = strftime(ProcessingEndTime, "%F %T.%3N")
| table starttime EndTime

I am not getting the correct time. I am getting it in this format:

start time - 1697809010.604
EndTime - 1697809075.170

I want it in this format:

StartTime - 2023-10-20 02:16:56.629
EndTime - 2023-10-20 02:19:57.554

Can someone help me here?
2a. You are right. In this case the indexers ingest logs via a couple of TCP ports, and we have a load balancer that spreads logs across all indexers. I'd be glad to hear any suggestions for the architecture if you have them. When I built our Splunk environment I decided this was the most convenient approach for our production: it lets me route events to indexes simply by port. 2b. Yes, I'm talking about applying the bundle. I'm sure the configuration was applied, because I always use "splunk apply cluster-bundle -auth password --answer-yes" and wait for the nodes to restart if they decide to do so.
Same issue. +0000 ERROR ModularInputs [18816 TcpChannelThread] - Argument validation for scheme=proofpoint_tap_siem: killing process, because executing it took too long (over 30000 msecs). For me, it turned out to be an OS issue: on Ubuntu the input works, but the Red Hat boxes don't.
@ITWhisperer I tried the below query:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True="✔"
| bin _time span=1d
| dedup _time
| eval EBNCStatus="ebnc event balanced successfully"
| table EBNCStatus True

When I select "Last 7 days" it shows 8 events instead of 7.
Looks like a defect to me
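If you need nanosecond deltas in the meantime, a possible workaround (just a sketch, untested) is to split the subsecond digits off and compute the delta manually, since strptime appears to stop at microseconds:

| makeresults
| eval start = "2023-10-24T18:09:24.900883123"
| eval end = "2023-10-24T18:09:24.902185512"
| eval start_sec = strptime(replace(start, "\..*$", ""), "%Y-%m-%dT%H:%M:%S")
| eval end_sec = strptime(replace(end, "\..*$", ""), "%Y-%m-%dT%H:%M:%S")
| eval start_ns = tonumber(replace(start, "^.*\.", ""))
| eval end_ns = tonumber(replace(end, "^.*\.", ""))
| eval delta_ns = (end_sec - start_sec) * 1000000000 + (end_ns - start_ns)
| table start end delta_ns

This also avoids storing the full nanosecond epoch in a single number, which would lose precision anyway: an epoch time with 9 subsecond digits needs more significant digits than a double can hold.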
At the end of the search query, i.e. after the sort command
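For example (a sketch, keeping the StartTime/EndTime field names from your search), the tail of the query would be:

| table StartTime EndTime TRIM.CNX_Duration
| sort +StartTime +EndTime
| fieldformat StartTime = strftime(StartTime, "%F %T.%3N")
| fieldformat EndTime = strftime(EndTime, "%F %T.%3N")

Note that fieldformat must refer to fields that actually exist at that point in the pipeline (StartTime/EndTime, not ProcessingStartTime/ProcessingEndTime).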
@ahmad1950 - I have not tested it specifically, but I think you should be able to use all the features of Python, just as you would with external Python. I hope this helps!
Given that the initial search has the same criteria as the searchmatch, True will always be a tick, so you just need to dedup the days:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True="✔"
| bin _time span=1d
| dedup _time
| eval EBNCStatus="ebnc event balanced successfully"
| table EBNCStatus True
Thanks, I made the change. I get the value below, but it isn't in JSON format: each value is still in the same field (like @idfacture; @idfactureABC; @routename). Regards,
Thanks for the great explanation. The new screenshot is clearer and shows that I had used "j" instead of "J" in my regex. Please try this:

| rex mode=sed "s/rawJson=//"
| eval _raw=trim(_raw, "\"")
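As a run-anywhere illustration (a sketch with a made-up sample event, not your actual data):

| makeresults
| eval _raw = "rawJson=\"{\"idfacture\": \"123\"}\""
| rex mode=sed "s/rawJson=//"
| eval _raw = trim(_raw, "\"")
| spath

After the sed-mode rex strips the rawJson= prefix and trim removes the surrounding quotes, spath can parse _raw as JSON and extract the fields.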
Splunk Enterprise 9.0.5.1. Hello! I have to calculate the delta between two timestamps that have nanosecond granularity. According to the Splunk documentation, nanoseconds are supported with either %9N or %9Q: https://docs.splunk.com/Documentation/Splunk/9.0.5/SearchReference/Commontimeformatvariables When I try to parse a timestamp with nanosecond granularity, however, it stops at microseconds and calculates the delta in microseconds as well. My expectation is that Splunk should maintain and manage nanoseconds. Here is a run-anywhere example:

| makeresults
| eval start = "2023-10-24T18:09:24.900883123"
| eval end = "2023-10-24T18:09:24.902185512"
| eval start_epoch = strptime(start,"%Y-%m-%dT%H:%M:%S.%9N")
| eval end_epoch = strptime(end,"%Y-%m-%dT%H:%M:%S.%9N")
| table start end start* end*
| eval delta = end_epoch - start_epoch
| eval delta_round = round(end_epoch - start_epoch,9)

Is this a defect or am I doing something wrong? Thank you! Andrew
Fair point.  My goal is to break lines without regard for line ends since Splunk appears to be ignoring some of them. Try LINE_BREAKER = ()<\d+>20\d\d
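In props.conf that might look like this (a sketch; substitute your actual sourcetype name for the hypothetical one here):

[your_syslog_sourcetype]
LINE_BREAKER = ()<\d+>20\d\d
SHOULD_LINEMERGE = false

The empty capture group consumes nothing, so Splunk breaks right before each <PRI>-style prefix followed by a year, and SHOULD_LINEMERGE = false stops it from re-merging the lines afterwards.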
Thanks. Can you be more precise about where we need to paste this in the XML source code of the dashboard? Thanks.
Are you sure this line breaker will work in my situation? As you can see in the last screenshot, the event contains "... High: <0>, Low: <0>", and I suspect this breaker will cut events in unexpected places.
OK. We're getting somewhere. 2a. You have direct network inputs on the indexers? That's not the best idea and calls for some re-architecting, but it shouldn't be the reason for the line-breaking problems. 2b. What do you mean by "apply props.conf"? Do you push the configuration bundle to the cluster from the CM, or just define props.conf on the CM and leave it there? If you're pushing the configs, did you verify the effective configs on the indexer(s) receiving the events?
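For example, on an indexer you can check the effective settings with btool (replace the hypothetical sourcetype name with yours):

$SPLUNK_HOME/bin/splunk btool props list your_syslog_sourcetype --debug

The --debug flag shows which .conf file each effective setting comes from, which quickly tells you whether the pushed bundle actually won.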
1. I suspected that for some reason the events don't contain the whole pattern, so I tried checking only "\r". "\r" works on regex101. I have now changed this option back to the default "([\r\n]+)". 2. I'm getting these events via syslog. Logs come to the indexer layer, where I apply props.conf via the manager node, then I search on the search head layer. 3. Yes, I understand that I need to wait until the indexers apply the configuration and then search only events that arrived in Splunk afterwards.
Hello, Thank you for your help, I appreciate it. Let me try to explain what I want.

1. We send JSON logs to a MySQL DB from an application server. This is the log format from the application server:

{"bam":{"facture":{"@idFFFFF":"","@idBBBBB":"","@idCCCCC":"","@idCCCCC":"","@ABCACB":"","@status":""},"Contact":{"@idContact":"","@nom":"","@prenom":"","@adresse":"","@typeContact":""},"service":{"@jobName":"XX_Abcdef_Abccc_Token_V1","@jobVersion":"x.x","@routeName":"","@routeVersion":"","@currentTime":"2023-07-03 13:00:28","@idCorrelation":"545454ssss-abcc-456ss-5454-444455555554444","@serviceDuration":"1140"}}}

If I copy this line into Notepad and manually import it into Splunk, I get what I want (I used the default sourcetype): each value is extracted, so it's perfect.

2. To automatically pull new logs from the DB server, I decided to use Splunk DB Connect (maybe it's not the best choice?), so I configured a new input in Splunk DB Connect to get the values from the DB table. But now the data is not indexed in JSON format, as shown below.

How can I get this data in JSON format as shown in the first and second captures? I hope you understand better what I'm trying to do.

Regards,
I am trying to set up a dashboard which gives me details like users' current concurrency settings and role utilization. If someone has implemented this kind of dashboard, please help.
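As a starting point (only a sketch, not a full dashboard), the per-role search quotas can be read from the REST API:

| rest /services/authorization/roles splunk_server=local
| table title srchJobsQuota rtSrchJobsQuota srchDiskQuota

srchJobsQuota and rtSrchJobsQuota are the per-role limits on concurrent historical and real-time searches; you could compare them against actual search activity, e.g. from the _audit index.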