Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

I have a log like the one below displayed in the Splunk UI. I want the "message" key to be parsed as JSON as well. How can I do that? Below is the raw text:

{"stream":"stderr","logtag":"F","message":"{\"Context\":{\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\"},\"LogCollection\":[{\"source\":\"handler.go:44\",\"timestamp\":\"2023-11-30T15:01:07.209285695Z\",\"msg\":{\"specversion\":\"1.0\",\"type\":\"com.cnc.caas.documentgenerationservices.documentgeneration.completed.public\",\"source\":\"/events/caas/documentgenerationservices/record/documentgeneration\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"time\":\"2023-11-30T15:01:06.972071059Z\",\"subject\":\"record-documentgenerationservices-wip\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"datacontenttype\":\"application/json\",\"data\":{\"CAAS\":{\"Event\":{\"Version\":\"2.0.0\",\"EventType\":\"documentgeneration.completed\",\"LifeCycleStatus\":\"wip\",\"EventSequence\":4,\"OriginTimeStamp\":\"2023-11-30T15:01:06.972Z\",\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RequestedDeliveryChannel\":\"Print\",\"RecordedDeliveryChannel\":\"Print\",\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be
74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"Priority\":false,\"EventFailedStatus\":0,\"RetryCount\":1,\"Errors\":null,\"OriginalSqsMessage\":{\"data\":{\"CAAS\":{\"Event\":{\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationGroupID\":\"mbrmatreqok\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"Errors\":null,\"EventFailedStatus\":0,\"EventSequence\":4,\"EventType\":\"recordcomposition.response.start\",\"LifeCycleStatus\":\"wip\",\"OriginTimeStamp\":\"2023-11-30T15:00:04.996Z\",\"PreRendered\":false,\"Priority\":false,\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RecordedDeliveryChannel\":\"Print\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RequestedDeliveryChannel\":\"Print\",\"RetryCount\":1,\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"Version\":\"2.0.0\"}}},\"datacontenttype\":\"application/json\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"source\":\"/events/caas/smart/record/composition\",\"specversion\":\"1.0\",\"subject\":\"record-composition-response-start\",\"time\":\"2023-11-30T15:01:05.756937686Z\",\"type\":\"com.cnc.caas.composition.response.start.private\"},\"CommunicationGroupID\":\"mbrmatreqok\",\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"PreRendered\":false}}}}},{\"source\":\"handler.go:46\",\"timestamp\":\"2023-11-30T15:01:07.21572506Z\",\"msg\":\"mongo insert id is 
6568a3b3ab042d54478ef071\"}],\"RetryCount\":1,\"level\":\"error\",\"msg\":\"Log collector output\",\"time\":\"2023-11-30T15:01:07Z\"}","kubernetes":{"pod_name":"eventsupdatetomongo-d98bb8594-cnbsd","namespace_name":"caas-composition-layer","pod_id":"50d49842-793a-41c8-a903-11c23607dfd6","labels":{"app":"eventsupdatetomongo","pod-template-hash":"d98bb8594","version":"dcode-801-1.0.1-2745653"},"annotations":{"cattle.io/timestamp":"2023-06-08T22:30:33Z","cni.projectcalico.org/containerID":"58cf3b42ab43fac0a5bf1f97e5a4a7db9dbf6a572705f02480384e63c2a53288","cni.projectcalico.org/podIP":"172.17.224.31/32","cni.projectcalico.org/podIPs":"172.17.224.31/32","kubectl.kubernetes.io/restartedAt":"2023-11-20T17:28:31Z"},"host":"ip-10-168-125-122.ec2.internal","container_name":"eventsupdatetomongo","docker_id":"c83dd87422fbdcae60a40ac50bcad0f387d50f3021975b81dbccac1bc0d965b2","container_hash":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo@sha256:3b7e5e0908cec3f68baa7f9be18397b6ce4aa807f92b98b6b8970edac9780388","container_image":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo:dcode-801-1.0.1-2745653"}}      
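If it helps future readers: one common search-time approach is to run the already-extracted message field back through spath, which parses the escaped JSON string inside it. A minimal sketch (assuming the outer JSON is auto-extracted, e.g. via KV_MODE=json, and with hypothetical index/sourcetype names):

```
index=my_k8s_index sourcetype=my_json_sourcetype
| spath output=message path=message
| spath input=message
```

The first spath pulls the message field out of the outer event (redundant if automatic JSON extraction already did it); the second treats that field's value as JSON and extracts its nested keys, including Context.* and LogCollection{}.*.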
Hi, I'm new to Splunk and wanted to change the time zone of my Splunk Cloud deployment. Right now my Cloud Monitoring app shows UTC, and I want Mountain Time (Denver). So far my understanding is to change the timezone under Splunk > Preferences > Timezone. I wanted to make sure that's the right approach, since I don't want to tweak the whole deployment. Thank you!! @isoutamo @gcusello
I am trying to build my own KV store geo data. So far I can run:

| inputlookup geobeta
| where endIPNum >= 1317914622 and startIPNum <= 1317914622
| table latitude,longitude

That returns:

latitude,longitude
"51.5128","-0.0638"

But how do I combine this with a search? I was trying this, but it doesn't work:

| makeresults eval ip_address_integer=1317914622
    [ | inputlookup geobeta | where endIPNum >= ip_address_integer and startIPNum <= ip_address_integer ]

Many thanks for hints.
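For context on why the attempt above fails: a subsearch runs before the outer search, so it cannot see per-result fields like ip_address_integer; its output is turned into literal search terms. One commonly sketched workaround (untested here, and slow beyond a handful of rows, since it re-runs the lookup once per result) is map, which does substitute fields from each incoming row:

```
| makeresults
| eval ip_address_integer=1317914622
| map maxsearches=10 search="| inputlookup geobeta
    | where endIPNum >= $ip_address_integer$ AND startIPNum <= $ip_address_integer$
    | eval ip_address_integer=$ip_address_integer$
    | table ip_address_integer latitude longitude"
```

For large result sets, a CIDR-based lookup definition is usually the scalable alternative to integer range matching.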
Thank you so much for the help! Once I was able to wrap my head around it and do some tinkering, your solution worked perfectly! Here is what I ended up with for the global search:

<search>
  <query>
    | makeresults
    | addinfo
    | eval last_week_earliest=relative_time(info_min_time,"-7d")
    | eval last_week_latest=relative_time(info_max_time,"-7d")
  </query>
  <earliest>$Datepkr.earliest$</earliest>
  <latest>$Datepkr.latest$</latest>
  <done>
    <set token="last_week_earliest">$result.last_week_earliest$</set>
    <set token="last_week_latest">$result.last_week_latest$</set>
    <eval token="time_span">($result.last_week_latest$ - $result.last_week_earliest$)/60</eval>
    <eval token="span_value">round($time_span$,0)</eval>
  </done>
</search>

And here is what is in the main search:

<search>
  <query>
    index=*apievents* request.org_name=$org$ request.env=$env$ request.api_name=$api$
    (earliest=$Datepkr.earliest$ latest=$Datepkr.latest$) OR (earliest=$last_week_earliest$ latest=$last_week_latest$)
    | eval category=if(_time &lt;= $last_week_latest$, "Last Week Volume", "Current Week Volume")
    | eval _time=if(_time &lt;= $last_week_latest$, _time+(7 * 86400), _time)
    | timechart cont=f span=$span_value$s count by category
  </query>
</search>

I have it in a place now where it works and looks like I want it to look; I'm just not sure if there was a much easier path to setting the chart beginning/end time and span fields. Initially I didn't have the time_span and span_value tokens and just tried to let the timechart function do its thing automatically. It still kept the full time range of seven days when displaying, so all of the timeshifted events were displaying on the seventh (i.e. current) day. When I added the cont=f setting things got a bit better, but the chart was displaying in a way that the span field looked like it was still stuck on what it would have been for a seven-day range. I set it manually to be 1/60 of whatever the user-selected time range is in seconds.
That seems to approximate the default behavior of timechart, which looks like it does anywhere from 1/48 to 1/60 depending on what will divide evenly. If there's a simpler solution to that I'd love to know what it is, but like I said what I have there seems to work perfectly for any time range. Thanks again for the help @bowesmana !
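On the "easier path" question: SPL's built-in timewrap command exists for exactly this kind of week-over-week overlay, and may replace most of the token plumbing. A sketch, untested against your dashboard (the time bounds here are assumptions):

```
index=*apievents* request.org_name=$org$ request.env=$env$ request.api_name=$api$
    earliest=-14d@d latest=now
| timechart span=1h count
| timewrap 1week
```

timewrap shifts each week onto a common time axis and emits one series per week, so the span still has to suit the wrapped window, but the relative_time token dance goes away.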
Ah interesting, I hadn't seen that specific one before but had seen others in a similar vein. My main hope was to not have to install any new apps as I'm working for a client and it'd create more work keeping it up to date, plus I thought a macro (what I was hoping to turn it into) could be easily transferred anywhere. But I think maybe using python/apps is the best bet, has a lot more features and just works better, with a lot more error checking possible.
Hi, I totally agree with @PickleRick. If you have this in VMware or something similar, why don't you use it to do the storage migration? In that case there is no need to do anything on the Splunk side, and you can do this without service breaks. Of course, if you don't have the needed licenses for your virtualisation / storage layer, then it's a different story. But I expect that you have Linux VGs in use, and you can use those to do this without service breaks, or at least with a minimal reboot etc. splunk offline --enforce-counters should be used only when you are removing the whole node permanently from the cluster! r. Ismo
The messages are there for information.  Admins can use them to see the lifespans of buckets and make informed decisions about changes needed to indexes.conf settings.
And a little bit more about this: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Upgrading_the_Splunk_platform
Hi, is there any reason why you don't want to use this: https://splunkbase.splunk.com/app/5565 ? r. Ismo
Hi, if you don't create / use your own certificates, then Splunk automatically creates its own with Splunk's default CA. You don't need to do anything; just install and start Splunk and you have a TLS cert on splunkd. Actually, I don't know if there is any way to use it without a TLS cert! If you want the replication port to use TLS certs, those you must create and configure yourself. The default way in a PoC is to use plain-text connections. If/when you want to use TLS on those as well, you should look at the docs https://docs.splunk.com/Documentation/Splunk/latest/Security/AboutsecuringyourSplunkconfigurationwithSSL and/or the .conf presentation https://conf.splunk.com/files/2023/slides/SEC1936B.pdf r. Ismo
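For completeness, a minimal sketch of what the manual replication-port TLS setup tends to look like in server.conf on each peer (illustrative paths; check the server.conf spec for your version before relying on it):

```
[replication_port-ssl://9887]
serverCert = /opt/splunk/etc/auth/mycerts/indexer_cert.pem
sslPassword = <certificate_key_password>
```

The certificate itself (and the CA chain referenced from [sslConfig]) must be created and distributed by you, as noted above.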
Hi All, So I've created the logic below to decode base64. Other discussions on this topic give possible solutions, but they only work when what has been encoded is smaller in size, because of the use of list() in their stats command. My logic looks like this:

| eval time=_time
| appendpipe
    [ | eval converts=split(encoded,"")
      | mvexpand converts
      | lookup base64conversion.csv index as converts OUTPUT value as base64bin
      | table encoded, base64bin, time
      | mvcombine base64bin
      | eval combined=mvjoin(base64bin,"")
      | rex field=combined "(?<asciibin>.{8})" max_match=0
      | mvexpand asciibin
      | lookup base64conversion.csv index as asciibin OUTPUT value as outputs
      | table encoded, outputs, time
      | mvcombine outputs
      | eval decoded=mvjoin(outputs,"")
      | table encoded, decoded, time ]
| selfjoin time

This is partially taken from other people's work, so some of it may be familiar from other discussions. My issue is that when put into a larger search, it doesn't work for all values, especially the seemingly longer ones. I can't show it in action unfortunately, but if you have a number of encoded commands to run it against, it will only do it for one of them. I thought this might be because the selfjoin on time is not entirely unique, but I'm starting to think it's because I'm not using a stats command before the appendpipe to group by encoded; even when I do that, though, it still doesn't work. The lookup I'm using is based on the one discussed here: https://community.splunk.com/t5/Splunk-Search/base64-decoding-in-search/m-p/27572 At this point I will likely just install an app if no one can resolve this. I thought I'd ask to get other people's points of view; any help would be much appreciated.
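One hedged, untested idea for narrowing this down: _time is frequently shared by several events, so selfjoin time can collapse unrelated rows into one. Minting a per-event key with streamstats and carrying it through the pipeline instead might behave better:

```
| streamstats count AS row_id
| appendpipe
    [ | eval converts=split(encoded,"")
      | mvexpand converts
      | lookup base64conversion.csv index AS converts OUTPUT value AS base64bin
      | table encoded base64bin row_id
      | mvcombine base64bin
      | eval combined=mvjoin(base64bin,"")
      | rex field=combined "(?<asciibin>.{8})" max_match=0
      | mvexpand asciibin
      | lookup base64conversion.csv index AS asciibin OUTPUT value AS outputs
      | table encoded outputs row_id
      | mvcombine outputs
      | eval decoded=mvjoin(outputs,"")
      | table encoded decoded row_id ]
| selfjoin row_id
```

Since row_id is unique per original event, mvcombine also groups per event rather than per timestamp.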
I am trying to set up a PoC for Splunk indexing, and the manager node is up, but it runs on an HTTP link (the certificate is not there yet) instead of HTTPS. While configuring the peer, when I provide the address of the manager node, I am getting the below error (screenshot not included). Is there a way to bypass this or create a dummy certificate for Splunk?
@richgalloway But why do these messages occur? Any specific reason?
Thanks for your response; the pieces are starting to fall into place. It still seems confusing that I'm seeing [script://....] stanzas that call PowerShell as well as [powershell://...] stanzas also calling PowerShell, with one using schedule and the other using interval. It really looks messy, which might explain why .bat and .py scripts seem to work so much better (especially on Unix systems), but this could be my OCD kicking in.
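For anyone comparing the two, illustrative stanzas side by side (stanza names and paths are made up): powershell:// hands an inline command to Splunk's embedded PowerShell host and is scheduled with cron syntax, while script:// runs a script file on an interval in seconds:

```
[powershell://ServiceStatus]
script = Get-Service | Select-Object -Property Name, Status
schedule = */5 * * * *

[script://$SPLUNK_HOME\etc\apps\my_app\bin\service_status.bat]
interval = 300
```

So seeing both in one deployment usually just reflects two add-on authors choosing different mechanisms, not a misconfiguration.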
I am a learner who wants to create a Splunk server where I can retrieve information from a business server that I work at. To securely protect the firm, I wonder if I can connect to a server that all of the PCs are connected to and retrieve the information going out and coming in. That way I can supervise the traffic and see if there is anything out of the ordinary. Any help is appreciated :).
Those messages are reporting normal behavior.  No action is required so the messages can be ignored.  The messages cannot be suppressed except by changing the logging level.
I added the below to get what I want:

| search Country!=<country name>
The Splunkd logs are sending me the messages listed below. Three days later, the alerts reappear once Splunkd has restarted. However, I've since made some adjustments to indexes.conf and added two attributes:

maxHotBuckets = 5
minHotIdleSecsBeforeForceRoll = auto

Please advise if both settings are sufficient to permanently remove the information messages.

11-04-2023 15:40:09.545 +0100 INFO HotBucketRoller - finished moving hot to warm bid=asr~308~34353497-7F2F-41CB-B772-DAF7007EA623 idx=abs from=hot_v1_308 to=db_1698249739_1698190953_308 size=786313216 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
11-03-2023 22:07:29.511 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_internal~379~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_379 to=db_1698211695_1698040811_379 size=1048535040 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
11-01-2023 07:31:25.596 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_audit~69~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_audit from=hot_v1_69 to=db_1696240764_1695536757_69 size=786419712 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-31-2023 19:58:48.033 +0100 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~140~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_140 to=db_1696974841_1696841261_140 size=786358272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-31-2023 17:23:48.700 +0100 INFO HotBucketRoller - finished moving hot to warm bid=asr~303~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_303 to=db_1697800494_1697727845_303 size=785281024 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-29-2023 00:03:30.635 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~376~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_376 to=db_1673823600_1673823600_376 size=40960 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-27-2023 12:24:16.567 +0200 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~138~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_138 to=db_1696587710_1696461161_138 size=786423808 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-25-2023 07:28:42.146 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~374~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_374 to=db_1697476202_1697263512_374 size=1048510464 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-24-2023 06:36:55.716 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~293~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_293 to=db_1697038969_1696983723_293 size=786386944 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-20-2023 13:15:13.165 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~286~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_286 to=db_1696492029_1696421708_286 size=785948672 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-17-2023 08:50:44.494 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~373~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_373 to=db_1697263511_1697083171_373 size=1048502272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-16-2023 19:10:28.534 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~372~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_372 to=db_1697083169_1696908238_372 size=1048461312 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-15-2023 18:10:43.940 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_introspection~230~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_introspection from=hot_v1_230 to=db_1683689783_1619379864_230 size=413696 caller=lru maxHotBuckets=3, count=3 hot buckets + 1 quar bucket,evicting_count=1 LRU hots
10-14-2023 21:26:48.653 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_audit~67~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_audit from=hot_v1_67 to=db_1694945963_1694438187_67 size=786403328 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-14-2023 08:06:09.886 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~369~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_369 to=db_1696504588_1696317607_369 size=1047363584 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-14-2023 05:02:31.677 +0200 INFO HotBucketRoller - finished moving hot to warm bid=wmc~44~34353497-7F2F-41CB-B772-DAF7007EA623 idx=www from=hot_v1_44 to=db_1695949104_1695348831_44 size=786358272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-12-2023 05:59:51.941 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~367~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_367 to=db_1696102911_1695901400_367 size=1048420352 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-11-2023 17:43:09.179 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~284~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_284 to=db_1696364124_1696299722_284 size=786280448 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-10-2023 23:54:56.050 +0200 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~135~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_135 to=db_1696039435_1695914107_135 size=786350080 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
What is your search?
OK, I found the problem. One of the panels that updates the table is trellis_pie, and this token is the one that breaks everything, so I guess I configured it wrong. What is the right way to configure the trellis_pie token? This is the source:

<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">collapsed</option>
<option name="charting.chart">pie</option>
<option name="charting.chart.sliceCollapsingThreshold">0</option>
<option name="charting.drilldown">all</option>
<option name="charting.legend.placement">none</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">1</option>
<drilldown>
  <set token="host_exposure_level">$row.category$</set>
</drilldown>
</chart>
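In trellis mode, the clicked segment's split value is normally captured with the predefined trellis drilldown tokens rather than $row.<field>$. A sketch of what the drilldown might look like instead (untested against this dashboard):

```
<drilldown>
  <set token="host_exposure_level">$trellis.value$</set>
</drilldown>
```

$trellis.name$ (the split field) and $trellis.value$ (the clicked member's value) are the trellis tokens documented for Simple XML drilldowns.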