All Posts



Looks like a defect to me
At the end of the search query, i.e. after the sort command
@ahmad1950 - I have not tested it specifically, but I think you should be able to use all the features of Python, just as you would with external Python. I hope this helps!
Given the initial search has the same criteria as the searchmatch, True will always be a tick, so you just need to dedup the days

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True="✔"
| bin _time span=1d
| dedup _time
| eval EBNCStatus="ebnc event balanced successfully"
| table EBNCStatus True
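For what it's worth, the bin + dedup idea can be illustrated outside Splunk. Here is a minimal Python sketch of the same one-event-per-day logic (the sample events are made up for illustration):

```python
from datetime import datetime

# Hypothetical sample events (timestamp, message). This mirrors
# "| bin _time span=1d | dedup _time": bucket each event to its
# calendar day and keep only the first event per bucket.
events = [
    ("2023-10-01T03:15:00", "ebnc event balanced successfully"),
    ("2023-10-01T18:42:00", "ebnc event balanced successfully"),
    ("2023-10-02T07:05:00", "ebnc event balanced successfully"),
]

seen_days = set()
deduped = []
for ts, msg in events:
    day = datetime.fromisoformat(ts).date()  # bin to a 1-day span
    if day not in seen_days:                 # dedup on the day bucket
        seen_days.add(day)
        deduped.append((str(day), msg, "✔"))

print(deduped)
```

Two events on 2023-10-01 collapse to one row, so the output has one row per day, which is exactly what the table in the question asks for.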
Thanks, I made the change. I get the value below, but it isn't in JSON format; each value is still in the same field (like @idfacture; @idfactureABC; @routename). Regards,
Thanks for the great explanation. The new screenshot is clearer and shows that I had used "j" instead of "J" in my regex. Please try this

| rex mode=sed "s/rawJson=//"
| eval _raw=trim(_raw, "\"")
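If it helps to see what those two SPL steps do, here is a small Python sketch of the same transformation (the sample raw value is hypothetical):

```python
import re

# Hypothetical raw event: the sed-style rex strips the "rawJson=" prefix,
# and trim(_raw, "\"") removes the surrounding double quotes.
raw = 'rawJson="sample payload"'

raw = re.sub(r"rawJson=", "", raw)  # | rex mode=sed "s/rawJson=//"
raw = raw.strip('"')                # | eval _raw=trim(_raw, "\"")

print(raw)  # sample payload
```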
Splunk Enterprise 9.0.5.1

Hello! I have to calculate the delta between two timestamps that have nanosecond granularity. According to Splunk documentation, nanoseconds are supported with either %9N or %9Q: https://docs.splunk.com/Documentation/Splunk/9.0.5/SearchReference/Commontimeformatvariables

When I try to parse a timestamp with nanosecond granularity, however, it stops at microseconds and calculates the delta in microseconds as well. My expectation is that Splunk should maintain and manage nanoseconds. Here is a run anywhere:

| makeresults
| eval start = "2023-10-24T18:09:24.900883123"
| eval end = "2023-10-24T18:09:24.902185512"
| eval start_epoch = strptime(start,"%Y-%m-%dT%H:%M:%S.%9N")
| eval end_epoch = strptime(end,"%Y-%m-%dT%H:%M:%S.%9N")
| table start end start* end*
| eval delta = end_epoch - start_epoch
| eval delta_round = round(end_epoch - start_epoch,9)

Is this a defect or am I doing something wrong? Thank you! Andrew
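As a cross-check, Python's own strptime has the same microsecond cap, and the usual workaround is the same one that applies here: keep the subsecond part as an integer instead of a float. A sketch using the timestamps from the post, splitting off the fractional part manually so the delta stays exact in nanoseconds:

```python
from datetime import datetime

def to_epoch_ns(ts):
    """Parse 'YYYY-mm-ddTHH:MM:SS.fffffffff' into integer nanoseconds.

    Floats lose precision around 10^9 seconds, so the fraction is
    handled as an integer rather than parsed with %f.
    """
    base, frac = ts.split(".")
    whole = int(datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").timestamp())
    ns = int(frac.ljust(9, "0"))  # pad to 9 digits if shorter
    return whole * 1_000_000_000 + ns

start = to_epoch_ns("2023-10-24T18:09:24.900883123")
end = to_epoch_ns("2023-10-24T18:09:24.902185512")
print(end - start)  # 1302389 (nanoseconds)
```

The same trick works in SPL: extract the fractional digits into their own field and subtract them as integers, rather than relying on strptime to carry nine digits through a float epoch value.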
Fair point.  My goal is to break lines without regard for line ends since Splunk appears to be ignoring some of them. Try LINE_BREAKER = ()<\d+>20\d\d
Thanks. Can you be more precise about where we need to paste this in the XML source code of the dashboard? Thanks.
Are you sure that this line breaker will be good in my situation? As you can see in the last screenshot, the event contains "... High: <0>, Low: <0>" and I suspect that this breaker will cut events in unexpected places.
OK. We're getting somewhere.

2a. You have direct network inputs on indexers? That's not the best idea and calls for some re-architecting. But that shouldn't be the reason for problems with line breaking.

2b. What do you mean by "apply props.conf"? Do you push the configuration bundle to the cluster from the CM, or just define props.conf on the CM and leave it at that? If you're pushing the configs, did you verify the effective configs on the indexer(s) receiving the events?
1. I suspected that for some reason events don't contain the whole pattern, so I tried to check only "\r". "\r" works on regex101. Now I have changed this option back to the default "([\r\n]+)".

2. I'm getting these events via syslog. Logs come to the indexer layer, where I apply props.conf via the manager node; then I search on the search head layer.

3. Yeah, I understand that I need to wait while the indexers apply the configuration and then search only events that arrived in Splunk afterwards.
Hello, thank you for your help, I appreciate it. I'll try to explain what I want.

1. We send JSON logs to a MySQL DB from an application server. This is the log format from the application server:

{"bam":{"facture":{"@idFFFFF":"","@idBBBBB":"","@idCCCCC":"","@idCCCCC":"","@ABCACB":"","@status":""},"Contact":{"@idContact":"","@nom":"","@prenom":"","@adresse":"","@typeContact":""},"service":{"@jobName":"XX_Abcdef_Abccc_Token_V1","@jobVersion":"x.x","@routeName":"","@routeVersion":"","@currentTime":"2023-07-03 13:00:28","@idCorrelation":"545454ssss-abcc-456ss-5454-444455555554444","@serviceDuration":"1140"}}}

If I copy this line into Notepad and manually import it into Splunk, I get what I want (I used the default source type): each value is extracted, so it's perfect.

2. To automatically get the new logs from the DB server, I decided to use Splunk DB Connect (maybe it's not the best choice?), so I configured a new input in Splunk DB Connect to get the values from the DB table. But now the data is not indexed in JSON format, as shown below.

How can I get this data in JSON format as shown in the first and second captures? Hope you understand better what I'm trying to do.

Regards,
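One quick sanity check before tuning the DB Connect input is to confirm that the column value really is valid JSON on its own; if it parses, the problem is the sourcetype assigned to the DB Connect input rather than the data. A short Python sketch (field set shortened from the post, values hypothetical):

```python
import json

# A trimmed-down version of the row value from the post.
row_value = (
    '{"bam":{"service":{"@jobName":"XX_Abcdef_Abccc_Token_V1",'
    '"@routeName":"","@serviceDuration":"1140"}}}'
)

# If this succeeds, the payload is well-formed JSON and Splunk's
# automatic KV extraction would handle it given a JSON sourcetype.
doc = json.loads(row_value)
print(doc["bam"]["service"]["@jobName"])  # XX_Abcdef_Abccc_Token_V1
```

If the value parses cleanly like this, the usual fix is to point the DB Connect input at a sourcetype with JSON extraction enabled (as the manual-upload test already demonstrated with the default JSON source type).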
I am trying to set up a dashboard which gives me details like users' current concurrency settings and role utilization. If someone has implemented this kind of dashboard, please help.
Thanks for the new screenshot. It makes the situation a little clearer. Try this line breaker

LINE_BREAKER = ()<\d+>

As @PickleRick said, make sure the settings are on all indexers and heavy forwarders and that the instances are restarted after configurations are changed.
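To see why the empty capture group matters: Splunk discards the text matched by the first capture group of LINE_BREAKER, so with ()<\d+> it breaks *before* the priority tag and the tag stays with the following event. A lookahead split in Python models the same behavior (the syslog stream below is a made-up example):

```python
import re

# Hypothetical concatenated syslog stream where each event starts with
# a priority tag like <14>. LINE_BREAKER = ()<\d+> breaks before the
# tag (the empty group is what gets discarded), so a zero-width
# lookahead split reproduces the same event boundaries.
stream = (
    "<14>Oct 24 18:09:24 host app: first event"
    "<13>Oct 24 18:09:25 host app: second event"
)

events = [e for e in re.split(r"(?=<\d+>)", stream) if e]
print(events)
```

Each resulting event keeps its own <priority> prefix, which is what you want if later parsing (timestamp extraction, syslog priority handling) depends on it.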
When I call https://api.{REALM}.signalfx.com/v1/timeserieswindow with my access token in the header X-SF-TOKEN, I receive:

{ "message": "API Error: 400", "status": 400, "type": "error" }

The same happens when I add parameters to the request: https://api.{REALM}.signalfx.com/v1/timeserieswindow?query=sf_metric:"jvm.cpu.load"&startMs=1489410900000&endMs=1489411205000

Am I missing something?
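One common cause of a 400 with a query string like that is that the quotes and colon in sf_metric:"jvm.cpu.load" are sent unencoded. A sketch of building a properly encoded query string with the parameters from the post (this only shows the encoding; whether it resolves this particular 400 is an assumption):

```python
from urllib.parse import urlencode

# Parameters exactly as in the post; urlencode percent-encodes the
# colon and double quotes that a raw URL would pass through literally.
params = {
    "query": 'sf_metric:"jvm.cpu.load"',
    "startMs": 1489410900000,
    "endMs": 1489411205000,
}
qs = urlencode(params)
url = "https://api.{REALM}.signalfx.com/v1/timeserieswindow?" + qs
print(qs)
```

Passing the same dict as the params argument of requests.get would produce the same encoding automatically.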
@ITWhisperer I don't want to do anything if none of the events from a particular day have a searchmatch; I just want the events where searchmatch("ebnc event balanced successfully") is true. But when I select last 7 days, it should show only the last 7 events, one event per day. So for last 7 days it would look something like this (only 7 events):

EBNCStatus                           True
ebnc event balanced successfully     ✔
ebnc event balanced successfully     ✔
ebnc event balanced successfully     ✔
ebnc event balanced successfully     ✔
ebnc event balanced successfully     ✔
ebnc event balanced successfully     ✔
ebnc event balanced successfully     ✔
Yes, it should have been broken into multiple events but the question is - again - how are you ingesting those logs (and where are you applying your configurations).
Absolute import: from utils import get_log
Relative import: from .utils import get_log

This import line is in splunk/etc/apps/my_app/bin/myapp.py
Path of utils: splunk/etc/apps/my_app/bin/utils.py
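A common pattern in Splunk app scripts is to make the absolute import robust by putting the script's own bin directory on sys.path before importing, since Splunk does not always run scripts with their directory as the working directory. A runnable sketch that simulates the layout from the post (the get_log body is hypothetical):

```python
import os
import sys
import tempfile

# Simulate the app layout: a "bin" directory containing utils.py,
# standing in for splunk/etc/apps/my_app/bin/.
bin_dir = tempfile.mkdtemp()
with open(os.path.join(bin_dir, "utils.py"), "w") as f:
    f.write("def get_log():\n    return 'logger ready'\n")

# The fix for myapp.py: prepend the bin directory to sys.path,
# then the absolute form "from utils import get_log" resolves.
# In the real script, bin_dir would come from
# os.path.dirname(os.path.abspath(__file__)).
if bin_dir not in sys.path:
    sys.path.insert(0, bin_dir)

from utils import get_log

print(get_log())  # logger ready
```

The relative form from .utils import get_log only works when the bin directory is treated as a package (with an __init__.py and a package-qualified invocation), which Splunk's script runner generally does not do, so the sys.path + absolute-import pattern is the usual choice.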
Which event do you want to use from each day: the first, the last, or only those where searchmatch("ebnc event balanced successfully") is true? And what do you want to do if none of the events from a particular day have searchmatch("ebnc event balanced successfully") equating to true? As has been said many times before, you need to be clear about what you are trying to achieve.