All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @venkateshn2382, did you try using INDEXED_EXTRACTIONS = json in your props.conf? You can find more details at https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf. This option must be set on the Universal Forwarder, on the Heavy Forwarder (if present), and on the Search Heads. Ciao. Giuseppe
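As a minimal sketch (the sourcetype name is a placeholder, and it assumes each event is a complete JSON object), the props.conf stanza could look like this:

[your_json_sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none

Setting KV_MODE = none on the Search Heads is a common companion setting, so the fields are not extracted a second time at search time once they are indexed.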
Hi Guys, in Splunk, a field named “event_sub_type” has multiple values. We don’t want to ingest into Splunk any logs whose “event_sub_type” value is either “WAN Firewall” or “TLS” (as marked in the attached screenshot), as these are huge volumes of unwanted logs. Our search query is: index=cato sourcetype=cato_source. We tried multiple ways of editing props.conf and transforms.conf to exclude these logs, as below, but none of them succeeded:

props.conf
[sourcetype::cato_source]
TRANSFORMS-filter_logs = cloudparsing

transforms.conf
[cloudparsing]
REGEX = \"event_sub_type\":\"(WAN Firewall|TLS)\"
DEST_KEY = queue
FORMAT = nullQueue

Can someone please advise how to exclude events whose “event_sub_type” value is either “WAN Firewall” or “TLS” by editing props.conf and transforms.conf?

Raw events for reference which need to be excluded:

1. "event_sub_type":"WAN Firewall"

{"event_count":1,"ISP_name":"Shanghai internet","rule":"Initial Connectivity Rule","dest_is_site_or_vpn":"Site","src_isp_ip":"0.0.0.0","time_str":"2023-11-28T04:27:40Z","src_site":"CHINA-AZURE-E2","src_ip":"0.0.0.1","internalId":"54464646","dest_site_name":"china_112,"event_type":"Security","src_country_code":"CN","action":"Monitor","subnet_name":"cn-001.net-vnet-1","pop_name":"Shanghai_1","dest_port":443,"dest_site":"china_connect","rule_name":"Initial Connectivity Rule","event_sub_type":"WAN Firewall","insertionDate":1701188916690,"ip_protocol":"TCP","rule_id":"101238","src_is_site_or_vpn":"Site","account_id":5555,"application":"HTTP(S)","src_site_name":"china_connect","src_country":"China","dest_ip":"0.0.0.0","os_type":"OS_ANDROID","app_stack""TCP","TLS","HTTP(S)"],"time":1701188860834}

2. "event_sub_type":"TLS"

{"event_count":4,"http_host_name":"isp.vpn","ISP_name":"China_internet","src_isp_ip":"0.0.0.0","tls_version":"TLSv1.3","time_str":"2023-11-28T04:27:16Z","src_site":"china_mtt","src_ip":"0.0.0.0","internalId":"rtrgrtr","domain_name":"china.gh.com","event_type":"Security","src_country_code":"CN","tls_error_description":"unknown CA","action":"Alert","subnet_name":"0.0.0.0/24","pop_name":"china_1","dest_port":443,"event_sub_type":"TLS","insertionDate":1701188915580,"dest_country_code":"SG","tls_error_type":"fatal","dns_name":"china.com","traffic_direction":"OUTBOUND","src_is_site_or_vpn":"Site","account_id":56565,"application":"Netskope","src_site_name":"CHINA-44","src_country":"China","dest_ip":"0.0.0.0","os_type":"OS_WINDOWS","time":1701188836011,"dest_country":"Singapore"}
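A possible corrected sketch, not a confirmed fix: props.conf stanzas are named [<sourcetype>] (or [source::<path>] / [host::<host>]), not [sourcetype::<name>], and the regex is applied to the raw event text, where the quotes are not backslash-escaped. The settings have to live on the first full Splunk Enterprise instance that parses the data (heavy forwarder if present, otherwise the indexers), followed by a restart:

props.conf
[cato_source]
TRANSFORMS-filter_logs = cloudparsing

transforms.conf
[cloudparsing]
REGEX = "event_sub_type":"(WAN Firewall|TLS)"
DEST_KEY = queue
FORMAT = nullQueue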
Hi @_pravin, as I said, if you run

| mstats sum("mx.process.logs") as count WHERE "index"="mx_metrics" mx.env=$mx.env$ log.type=log span=10s BY pid service.name replica.name service.type module.names severity host cmd mx.env
| rename module.names as Module
| rename host as Hostname
| rename severity as lvl
| rename pid as PID

do you have null (or similar) values for the Module field? Ciao. Giuseppe
Hi @vishenps, you can set the timezone at system level or at user level, as you prefer; there isn't a best practice for this, so use whichever your final users require. Ciao. Giuseppe
Hi @MattKr, what do you need to do: use the lookup's geo coordinates to filter results, or something else? If you want to use the values from the lookup in a subsearch, you have to follow the rules of a subsearch, so the fields returned by the subsearch must have the same field names as the fields in your events. You can also use the WHERE clause inside the inputlookup command. Note that the AND logical operator must be in uppercase to be recognized:

| inputlookup geobeta WHERE endIPNum>=1317914622 AND startIPNum<=1317914622
| table latitude longitude

Also, how are your IP addresses written? Finally, with lookups you can also use the CIDR match type, as described at https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Usefieldlookupstoaddinformationtoyourevents. Ciao. Giuseppe
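To illustrate the field-name rule with a hypothetical sketch (my_index is a placeholder):

index=my_index
    [| inputlookup geobeta
     | where endIPNum >= 1317914622 AND startIPNum <= 1317914622
     | fields latitude longitude]

Splunk rewrites the subsearch result as search terms, here roughly (latitude="51.5128" longitude="-0.0638"), so this only filters anything if your events actually contain latitude and longitude fields with matching values.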
I have a log like below displayed in SPlunk UI. I want the "message" key to be parsed into json as well. how to do that? The below is the raw text.       {"stream":"stderr","logtag":"F","message":"{\"Context\":{\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\"},\"LogCollection\":[{\"source\":\"handler.go:44\",\"timestamp\":\"2023-11-30T15:01:07.209285695Z\",\"msg\":{\"specversion\":\"1.0\",\"type\":\"com.cnc.caas.documentgenerationservices.documentgeneration.completed.public\",\"source\":\"/events/caas/documentgenerationservices/record/documentgeneration\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"time\":\"2023-11-30T15:01:06.972071059Z\",\"subject\":\"record-documentgenerationservices-wip\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"datacontenttype\":\"application/json\",\"data\":{\"CAAS\":{\"Event\":{\"Version\":\"2.0.0\",\"EventType\":\"documentgeneration.completed\",\"LifeCycleStatus\":\"wip\",\"EventSequence\":4,\"OriginTimeStamp\":\"2023-11-30T15:01:06.972Z\",\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RequestedDeliveryChannel\":\"Print\",\"RecordedDeliveryChannel\":\"Print\",\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"Priority\":false,\"EventFailedStatus\":0,\"RetryCount\":1,\"Errors\":null,\"OriginalSqsMessage\":{\"data\":{\"CAAS\":{\"Event\":{\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationGroupID\":\"mbrmatreqok\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"Errors\":null,\"EventFailedStatus\":0,\"EventSequence\":4,\"EventType\":\"recordcomposition.response.start\",\"LifeCycleStatus\":\"wip\",\"OriginTimeStamp\":\"2023-11-30T15:00:04.996Z\",\"PreRendered\":false,\"Priority\":false,\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RecordedDeliveryChannel\":\"Print\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RequestedDeliveryChannel\":\"Print\",\"RetryCount\":1,\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"Version\":\"2.0.0\"}}},\"datacontenttype\":\"application/json\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"source\":\"/events/caas/smart/record/composition\",\"specversion\":\"1.0\",\"subject\":
\"record-composition-response-start\",\"time\":\"2023-11-30T15:01:05.756937686Z\",\"type\":\"com.cnc.caas.composition.response.start.private\"},\"CommunicationGroupID\":\"mbrmatreqok\",\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"PreRendered\":false}}}}},{\"source\":\"handler.go:46\",\"timestamp\":\"2023-11-30T15:01:07.21572506Z\",\"msg\":\"mongo insert id is 6568a3b3ab042d54478ef071\"}],\"RetryCount\":1,\"level\":\"error\",\"msg\":\"Log collector output\",\"time\":\"2023-11-30T15:01:07Z\"}","kubernetes":{"pod_name":"eventsupdatetomongo-d98bb8594-cnbsd","namespace_name":"caas-composition-layer","pod_id":"50d49842-793a-41c8-a903-11c23607dfd6","labels":{"app":"eventsupdatetomongo","pod-template-hash":"d98bb8594","version":"dcode-801-1.0.1-2745653"},"annotations":{"cattle.io/timestamp":"2023-06-08T22:30:33Z","cni.projectcalico.org/containerID":"58cf3b42ab43fac0a5bf1f97e5a4a7db9dbf6a572705f02480384e63c2a53288","cni.projectcalico.org/podIP":"172.17.224.31/32","cni.projectcalico.org/podIPs":"172.17.224.31/32","kubectl.kubernetes.io/restartedAt":"2023-11-20T17:28:31Z"},"host":"ip-10-168-125-122.ec2.internal","container_name":"eventsupdatetomongo","docker_id":"c83dd87422fbdcae60a40ac50bcad0f387d50f3021975b81dbccac1bc0d965b2","container_hash":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo@sha256:3b7e5e0908cec3f68baa7f9be18397b6ce4aa807f92b98b6b8970edac9780388","container_image":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo:dcode-801-1.0.1-2745653"}}      
Hi, I'm new to Splunk and want to change the time zone of my Splunk Cloud deployment. As of now my Cloud Monitoring app shows UTC, and I want Mountain Time (Denver). So far my understanding is to change the timezone under Splunk > Preferences > Timezone. I wanted to make sure that's the right approach, since I don't want to tweak the whole deployment. Thank you!! @isoutamo @gcusello
I am trying to build my own KV store geo data. So far I can run

| inputlookup geobeta
| where endIPNum >= 1317914622 and startIPNum <= 1317914622
| table latitude, longitude

That returns:

latitude,longitude
"51.5128","-0.0638"

But how do I combine this with a search? I was trying this, but it doesn't work:

| makeresults eval ip_address_integer=1317914622 [ | inputlookup geobeta | where endIPNum >= ip_address_integer and startIPNum <= ip_address_integer ]

Many thanks for hints
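One possible way to attach the coordinates to another search, sketched here against makeresults: a subsearch cannot see fields from the outer search (which is why the integer has to be repeated as a literal), but appendcols can glue the lookup result onto the row:

| makeresults
| eval ip_address_integer=1317914622
| appendcols
    [| inputlookup geobeta
     | where endIPNum >= 1317914622 AND startIPNum <= 1317914622
     | table latitude longitude]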
Thank you so much for the help! Once I was able to wrap my head around it and do some tinkering, your solution worked perfectly! Here is what I ended up with for the global search:

<search>
  <query>
    | makeresults
    | addinfo
    | eval last_week_earliest=relative_time(info_min_time,"-7d")
    | eval last_week_latest=relative_time(info_max_time,"-7d")
  </query>
  <earliest>$Datepkr.earliest$</earliest>
  <latest>$Datepkr.latest$</latest>
  <done>
    <set token="last_week_earliest">$result.last_week_earliest$</set>
    <set token="last_week_latest">$result.last_week_latest$</set>
    <eval token="time_span">($result.last_week_latest$ - $result.last_week_earliest$)/60</eval>
    <eval token="span_value">round($time_span$,0)</eval>
  </done>
</search>

And here is what is in the main search:

<search>
  <query>
    index=*apievents* request.org_name=$org$ request.env=$env$ request.api_name=$api$ (earliest=$Datepkr.earliest$ latest=$Datepkr.latest$) OR (earliest=$last_week_earliest$ latest=$last_week_latest$)
    | eval category=if(_time &lt;= $last_week_latest$, "Last Week Volume", "Current Week Volume")
    | eval _time=if(_time &lt;= $last_week_latest$, _time+(7 * 86400), _time)
    | timechart cont=f span=$span_value$s count by category
  </query>
</search>

I have it in a place now where it works and looks like I want it to look; I'm just not sure if there was a much easier path to setting the chart beginning / end time and span fields. Initially I didn't have the time_span and span_value tokens and just let the timechart command do its thing automatically. It still kept the full time range of seven days when displaying, so all of the time-shifted events were displayed on the seventh (i.e. current) day. When I added the cont=f setting things got a bit better, but the chart displayed as if the span field were still stuck on what it would have been for a seven-day range. I set it manually to 1/60 of whatever time range the user selected, in seconds. That seems to approximate the default behavior of timechart, which looks like it uses anywhere from 1/48 to 1/60 depending on what will divide evenly. If there's a simpler solution to that I'd love to know what it is, but like I said, what I have seems to work perfectly for any time range. Thanks again for the help @bowesmana !
Ah interesting, I hadn't seen that specific one before, but I had seen others in a similar vein. My main hope was to avoid installing any new apps, as I'm working for a client and it would create more work keeping them up to date; plus I thought a macro (what I was hoping to turn it into) could be easily transferred anywhere. But I think using Python/apps is probably the best bet: it has a lot more features and just works better, with a lot more error checking possible.
Hi, I totally agree with @PickleRick. If you have this in VMware or something similar, why don't you use it to do the storage migration? In that case there is no need for any actions on the Splunk side, and you can do it without service breaks. Of course, if you don't have the needed licenses for your virtualisation / storage layer, then it's a different story. But I expect that you have Linux VGs in use and can use those to do this without service breaks, or at least with minimal reboots. splunk offline --enforce-counts should be used only when you are removing the whole node permanently from the cluster! r. Ismo
The messages are there for information.  Admins can use them to see the lifespans of buckets and make informed decisions about changes needed to indexes.conf settings.
And a little bit more about this: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Upgrading_the_Splunk_platform
Hi, is there any reason why you don't want to use this: https://splunkbase.splunk.com/app/5565 ? r. Ismo
Hi, if you don't create / use your own certificates, then Splunk automatically creates its own, signed by Splunk's default CA. You don't need to do anything; just install and start Splunk and you have a TLS cert on splunkd. Actually, I don't know if there is any way to run it without a TLS cert! If you want the replication port to use TLS certs, those you must create and configure yourself. The default way in a PoC is to use plain-text connections there. If/when you want to use TLS on those as well, you should look at the docs https://docs.splunk.com/Documentation/Splunk/latest/Security/AboutsecuringyourSplunkconfigurationwithSSL and/or the .conf presentation https://conf.splunk.com/files/2023/slides/SEC1936B.pdf r. Ismo
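As a rough, unverified sketch of the peer-side server.conf for a TLS replication port (the port, path and password are placeholders; check the docs above for the exact attributes in your version):

[replication_port-ssl://9887]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = <certificate_password>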
Hi All, I've created the logic below to decode base64. Other discussions on this topic give possible solutions, but they only work when the encoded value is small, because of the use of list() in their stats command. My logic looks like this:

| eval time=_time
| appendpipe
    [ | eval converts=split(encoded,"")
    | mvexpand converts
    | lookup base64conversion.csv index as converts OUTPUT value as base64bin
    | table encoded, base64bin, time
    | mvcombine base64bin
    | eval combined=mvjoin(base64bin,"")
    | rex field=combined "(?<asciibin>.{8})" max_match=0
    | mvexpand asciibin
    | lookup base64conversion.csv index as asciibin OUTPUT value as outputs
    | table encoded, outputs, time
    | mvcombine outputs
    | eval decoded=mvjoin(outputs,"")
    | table encoded, decoded, time ]
| selfjoin time

And it looks like this in a test environment: This is partially taken from other people's work, so some of it may be familiar from other discussions. My issue is that, when put into a larger search, it doesn't work for all values, especially the seemingly longer ones. I can't show it in action, unfortunately, but if you have a number of encoded commands to run it against, it will only do it for one of them. I thought this might be because the selfjoin on time is not entirely unique, but I'm starting to think it's because I'm not using a stats command before the appendpipe to group by encoded; even when I do that, though, it still doesn't work. The lookup I'm using is based on the one discussed here: https://community.splunk.com/t5/Splunk-Search/base64-decoding-in-search/m-p/27572 At this point I will likely just install an app if no one can resolve this. I thought I'd ask to get other people's points of view; any help would be much appreciated.
I am trying to set up a PoC for a Splunk indexer cluster and the manager node is up, but it runs on an HTTP link (there is no certificate yet) instead of HTTPS. While configuring the peer, when I provide the address of the manager node, I get the below error: Is there a way to bypass this or create a dummy certificate for Splunk?
@richgalloway But why do these messages occur? Any specific reason?
Thanks for your response; the pieces are starting to fall into place. It still seems confusing that I'm seeing [script://....] stanzas that appear to call PowerShell as well as [powershell://...] stanzas also calling PowerShell, with one using schedule and the other using interval. It really looks messy, which might explain why .bat and .py scripts seem to work so much better (especially on Unix systems), but this could be my OCD kicking in.
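For what it's worth, a rough hypothetical side-by-side (the paths and names are made up): a [powershell://] input runs a .ps1 through Splunk's built-in PowerShell host and is scheduled with a cron-style schedule, while a [script://] input runs an arbitrary executable or wrapper and repeats every interval seconds:

[powershell://MyPowerShellInput]
script = . "$SplunkHome\etc\apps\my_app\bin\collect.ps1"
schedule = */5 * * * *
sourcetype = my:powershell

[script://$SPLUNK_HOME\etc\apps\my_app\bin\collect.bat]
interval = 300
sourcetype = my:script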
I am a learner who wants to create a Splunk server where I can retrieve information from a business server at the firm I work for. To help protect the firm, I wonder if I can connect to a server that all of the PCs are connected to and retrieve the information going out and coming in. That way I can supervise the traffic and see if there is anything out of the ordinary. Any help is appreciated :).