I'm really not sure what this is doing. I incorporated it into my code and it was not what I was expecting. I have also shifted to using span=15m in my timechart command, because other calculations on the dashboard I am working on don't represent the data the way the users expect when the span/bucket is smaller than 15 minutes. So, my next question (I can start a new thread if needed): with span=15m over an hour's sample I get four 15-minute buckets, but the buckets sit on the quarter-hour marks of each hour and do not start from when the query is run, i.e. buckets = 0-15, 15-30, 30-45, 45-00. Is there an option on timechart to force it to start at the current minute? I found a reference to <snap-to-time> in the documentation but don't understand how to use it.
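For what it's worth, the alignment arithmetic being described can be sketched outside Splunk. By default, 15-minute buckets snap to clock boundaries (:00, :15, :30, :45); aligning to query time would mean measuring bucket offsets from "now" instead. A minimal Python sketch (the function names are illustrative, not Splunk options):

```python
SPAN = 15 * 60  # 15 minutes, in seconds

def snapped_bucket(t, span=SPAN):
    """Default timechart-style behavior: boundaries fall on clock multiples of span."""
    return t - (t % span)

def now_aligned_bucket(t, now, span=SPAN):
    """Alternative: boundaries offset so one bucket edge lands exactly on 'now'."""
    return t - ((t - now) % span)

base = 1_699_999_200          # an epoch second that sits exactly on a quarter-hour
now = base + 7 * 60           # pretend the query runs 7 minutes past the boundary
event_time = now - 3 * 60     # an event from 3 minutes ago

print(snapped_bucket(event_time) == base)          # True: snapped to the clock mark
print(now_aligned_bucket(event_time, now) % SPAN)  # 420: shares now's 7-minute offset
```

The difference is only in what the modulo is taken relative to: a fixed clock origin versus query time.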
@ITWhisperer
I am using the same approach in my query, but I am not getting the correct start time and end time.
query:
index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
| rex "TRIM\.CNX(CTR)?\.(?<TRIM_ID>\w+)"
| transaction TRIM_ID startswith="Reading Control-File /absin/TRIM.CNXCTR." endswith="Completed Settlement file processing, TRIM.CNX."
| eval StartTime=min(_time)
| eval EndTime=StartTime+duration
| eval duration_min=floor(duration/60)
| rename duration_min as TRIM.CNX_Duration
| table StartTime EndTime TRIM.CNX_Duration
| sort +StartTime +EndTime
| fieldformat ProcessingStartTime = strftime(ProcessingStartTime, "%F %T.%3N")
| fieldformat ProcessingEndTime = strftime(ProcessingEndTime, "%F %T.%3N")
I want to extract the contractWithCustomers and contracts values below using rex, into a named field called entity.
For ID 1349c1f4-989c-4ea5-94ca-25fc40f6aab8: -flow started put:\contractWithCustomers:application\json:bmw-crm-wh-xl-cms-api-config
For ID 1697108895: -flow started put:\contracts:application\json:bmw-crm-wh-xl-cms-api-config
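One candidate pattern captures the token between `put:\` and the next `:`. The same idea can be sanity-checked outside Splunk in Python (the pattern is a suggestion based only on the two sample lines above, not tested against the real data):

```python
import re

# Suggested pattern: capture the word between a literal "put:\" and the following ":"
pattern = re.compile(r"put:\\(?P<entity>\w+):")

samples = [
    r"For ID 1349c1f4-989c-4ea5-94ca-25fc40f6aab8 -flow started put:\contractWithCustomers:application\json:bmw-crm-wh-xl-cms-api-config",
    r"For ID 1697108895 -flow started put:\contracts:application\json:bmw-crm-wh-xl-cms-api-config",
]

for line in samples:
    m = pattern.search(line)
    print(m.group("entity"))  # contractWithCustomers, then contracts
```

In SPL the backslash would need extra escaping inside the rex string, so expect to double it up there.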
Hello, I am trying to make a report which will display which notables were closed with which disposition. But unfortunately, when I make the report it shows me values such as "disposition:1", "disposition:2" and so on, and I can't figure out how to change these values so that the chart/graph shows "false positive" or "true positive" instead. I found a way to change the name of a column (rename ... as ...), but I can't find a way to change the values themselves, and applying the same logic (rename disposition:1 as "false positive") doesn't do the trick. Could you point me in the correct direction, please? Thanks in advance.
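Renaming changes a column header, not the cell values; what's needed is value mapping. The idea, sketched in Python with an assumed code-to-label table (the pairs below are illustrative guesses; check the actual Enterprise Security disposition lookup for the real mapping):

```python
# Hypothetical mapping from disposition codes to human-readable labels.
DISPOSITION_LABELS = {
    "disposition:1": "False Positive",
    "disposition:2": "True Positive",
}

def label(disposition):
    # Fall back to the raw code when no label is known for it.
    return DISPOSITION_LABELS.get(disposition, disposition)

print(label("disposition:1"))   # False Positive
print(label("disposition:99"))  # disposition:99 (unmapped, passed through)
```

In SPL the equivalent is typically an eval with case()/replace(), or a lookup table joined on the disposition field, applied before the charting command.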
I see the trim function didn't remove the first quotation mark, since it isn't at the beginning of the event (because of the timestamp). Here's another regex to try. It attempts to replace the event with the text after 'rawJson="' up to the last '"':
| rex mode=sed "s/rawJson=\\\"(.*)\\\"$/\1/"
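The substitution can be sanity-checked outside Splunk; Python's re.sub applies the same pattern once the SPL-level backslash escaping is stripped (the sample event below is invented for illustration):

```python
import re

# Invented sample: timestamp prefix plus a quoted rawJson payload.
event = '2023-10-20 02:16:56.629 rawJson="{"key": "value"}"'

# Same pattern as the sed expression once SPL string escaping is removed:
# replace rawJson="..." (greedy, up to the last quote at end of line) with the capture.
cleaned = re.sub(r'rawJson="(.*)"$', r'\1', event)
print(cleaned)  # 2023-10-20 02:16:56.629 {"key": "value"}
```

As in Splunk, anything before rawJson= (here, the timestamp) is left untouched, since only the matched portion is replaced.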
Unless you are running your search at exactly midnight, the last 7 days will be spread over 8 calendar days. You need to use the relative option in the time picker and align to the start and end of days to get exactly 7 days' worth of events.
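In time-picker terms this usually means something like earliest=-7d@d and latest=@d (snap to day). The underlying arithmetic, sketched in Python:

```python
import datetime as dt

def last_7_whole_days(now):
    """Return a [start, end) window covering exactly the 7 complete days before today."""
    midnight = dt.datetime(now.year, now.month, now.day)  # snap 'now' back to 00:00
    return midnight - dt.timedelta(days=7), midnight

start, end = last_7_whole_days(dt.datetime(2023, 10, 20, 14, 30))
print(start, end)  # 2023-10-13 00:00:00 2023-10-20 00:00:00
```

Without the snap-to-midnight step, a plain "last 7 days" window starting at 14:30 would include a partial day on each end.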
Hi All,
I have created the query below:
search index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
| rex "TRIM\.CNX(CTR)?\.(?<TRIM_ID>\w+)"
| transaction TRIM_ID startswith="Reading Control-File /absin/TRIM.CNXCTR." endswith="Completed Settlement file processing, TRIM.CNX."
| eval StartTime=min(_time)
| eval EndTime=StartTime+duration
| eval duration_min=floor(duration/60)
| rename duration_min as TRIM.CNX_Duration
| table StartTime EndTime TRIM.CNX_Duration
| sort +StartTime +EndTime
| fieldformat ProcessingStartTime = strftime(ProcessingStartTime, "%F %T.%3N")
| fieldformat ProcessingEndTime = strftime(ProcessingEndTime, "%F %T.%3N")
| table starttime EndTime
I am not getting the correct time; I am getting it in the format below:
start time - 1697809010.604
EndTime - 1697809075.170
I want it in this format:
StartTime - 2023-10-20 02:16:56.629
EndTime - 2023-10-20 02:19:57.554
Can someone help me here?
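Those values are Unix epoch seconds, so the remaining step is formatting. (Note that the query applies fieldformat to ProcessingStartTime/ProcessingEndTime, fields which are never created, while the table shows StartTime/EndTime, so the raw epoch values pass through unformatted.) For reference, the conversion itself looks like this in Python, using UTC; Splunk's strftime applies the search-time timezone, so local output will differ:

```python
import time

def fmt_epoch(ts):
    """Format fractional epoch seconds as 'YYYY-MM-DD HH:MM:SS.mmm' (UTC)."""
    whole = int(ts)
    millis = int(round((ts - whole) * 1000))
    return time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(whole)) + f".{millis:03d}"

print(fmt_epoch(1697809010.604))  # 2023-10-20 13:36:50.604 (UTC)
```

In SPL the analogous step is a fieldformat/eval with strftime applied to the field that is actually displayed.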
2a. You are right. In this case the indexers ingest logs via a couple of TCP ports. We have a load balancer that spreads logs across all indexers. Feel free to suggest improvements to the architecture if you want; I would be glad to see some good advice. When I was creating our Splunk environment, I decided that this was the most convenient way in our production. It allows me to easily route events to indexes by port alone. 2b. Yes, I'm talking about applying the bundle. I'm sure the configuration was applied, because I always use "splunk apply cluster-bundle -auth password --answer-yes" and wait for the nodes to reboot if they decide to do so.
Same issue: "+0000 ERROR ModularInputs [18816 TcpChannelThread] - Argument validation for scheme=proofpoint_tap_siem: killing process, because executing it took too long (over 30000 msecs)." For me, this turned out to be an OS issue: on Ubuntu the input works, but the Red Hat boxes don't, so...