All Topics

My colleagues and I have been pulling our hair out trying to get the partitioning right with the slim packaging toolkit. We manage to package the technical add-on from source into the tar.gz file and then run the command to generate packages for the specified workloads, but the packages don't differ. We have tried several variations of the "tasks" and "inputGroups" specification in the app.manifest file, but we can't manage to split the app so that all the Python code ends up only in the forwarder partition and none of it in the search head partition, which we need in order to install the app via self-service in Splunk Cloud. Any help, materials, or resources beyond the standard documentation (which unfortunately doesn't help much) would be very welcome and appreciated.
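For reference, this is roughly the shape I understand the inputGroups fragment to take before running slim partition against the packaged tar.gz. The group name and input stanza name below are placeholders, and this is only my reading of the SLIM schema, not a verified working manifest:

{
  "inputGroups": {
    "forwarder_inputs": {
      "inputs": [
        "script://./bin/my_modular_input.py"
      ]
    }
  }
}

The idea, as I understand it, is that the listed inputs.conf stanzas (and the scripts they reference) should be carried only by the package generated for the forwarder workload.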
References: <a href="https://cwe.mtaci.org/dada/definitions/32.html">CWE</a> <a href="https://wnde.org/www-community/goto/Command_defendeon">wnde</a>

I want to extract these into two separate fields:

https://wnde.org/www-community/goto/Command_defendeon
https://cwe.mtaci.org/dada/definitions/32.html
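A minimal sketch of one way to pull the URLs out of the href attributes; the field names ref_url, cwe_url, and wnde_url are placeholders:

| rex max_match=0 "href=\"(?<ref_url>[^\"]+)\""
| eval cwe_url=mvindex(ref_url, 0), wnde_url=mvindex(ref_url, 1)

This assumes the CWE link always appears first in the event; if the order varies, the two fields would need to be assigned by matching on the hostname instead.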
Good morning. This was on one of my search heads. Can anyone help or point me in the right direction for this? Some answers were not very clear to me. If someone can expand on them, that would be fantastic. Thanks
Hi all, I configured the following queue values on my Heavy Forwarder:

[queue=typingQueue]
maxSize = 100MB

[queue=indexQueue]
maxSize = 100MB

[queue=aggQueue]
maxSize = 100MB

[queue=parsingQueue]
maxSize = 100MB

but when I check the queues I find:

2 - Aggregation Queue 102400
3 - Typing Queue 102400
1 - Parsing Queue 512
4 - Indexing Queue 102400

What could the problem be? Why doesn't parsingQueue have the correct value? Could this value be set in another location? Ciao. Giuseppe
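To check whether another configuration file is overriding the value, a btool query along these lines should show which file each setting for that stanza comes from (the path assumes a default install location):

$SPLUNK_HOME/bin/splunk btool server list queue=parsingQueue --debug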
Hi, I have table data with 3 fields (Time, Method, Return):

Time        Method     Return
28/10/2022  Method 1   KO
28/10/2022  Method 2   KO
28/10/2022  Method 1   OK
28/10/2022  Method 1   OK
28/10/2022  Method 1   OK
28/10/2022  Method 1   OK
...         ...        ...
29/10/2022  Method 2   OK
29/10/2022  Method 2   OK
29/10/2022  Method 2   OK
29/10/2022  Method 2   OK
29/10/2022  Method 2   OK
29/10/2022  Method 2   OK
29/10/2022  Method 2   OK

I'd like to make a timechart with double aggregation (first per Method, then per Return) to get that kind of chart. The only thing I can do for the moment is this chart with this request:

| timechart count(eval(Return="KO")) as KO count(eval(Return="OK")) as OK by Method

Do you know how I can get the first timechart?

Thanks
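A minimal sketch of one way to get both aggregations onto a single timechart, by combining the two fields into one series name first (the field name series is a placeholder):

| eval series=Method." ".Return
| timechart count by series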
I am trying to use the rex command to extract an id number, which is a mixture of letters and numbers separated by dashes. For example, one of the ids looks like this: 34gv564-3333-5tg4-gt53-4rgt5eg5g35gb
The field itself is as follows: MFA challenge succeeded for account aaaaaaaaa with email example@example.co.uk. Session id is 34gv564-3333-5tg4-gt53-4rgt5eg5g35gb
The rex command I'm using is as follows:

| rex "(?i) is (?<id_number>[^\"]+)"

The only problem is that sometimes it extracts the email address as well. Any help would be greatly appreciated.
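A minimal sketch of a tighter pattern, assuming the id always follows the literal text "Session id is" and contains only letters, digits, and dashes:

| rex "(?i)Session id is (?<id_number>[\w-]+)"

Anchoring on "Session id is" rather than the first " is " in the event, and restricting the character class, should keep the email address out of the capture.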
Hi, in Indexer Clustering > Data Durability I see:

Root cause(s): search factor is not met
Unhealthy instances: com1-536367373.xxx.splunkcloud.com

How do I fix this issue? Thanks
Hi, I'm trying to extract a substring from field1 to create field3 and then match field2 with field3.

The search is:

index=antispam sourcetype=forcepointmail:sec
| fields msg suser from
| where NOT LIKE(suser,"%".from."%")

But from=Domain noreply <noreply@domain.com> and suser=noreply@domain.com. I need to extract the substring contained between <> in the "from" field and match the "suser" field with the created field.

I want to find every mail where the "from" field is different from the "suser" field, so I can find spoofed mails on our antispam device.

thx
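A minimal sketch of one possible approach, assuming the address is always wrapped in <> inside from (the field name from_addr is a placeholder):

index=antispam sourcetype=forcepointmail:sec
| fields msg suser from
| rex field=from "<(?<from_addr>[^>]+)>"
| where suser!=from_addr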
Hi Splunkers, I have a doubt about Splunk's parsing capabilities. Until now, every time I needed to parse data, I used an add-on, both custom ones written by me and ones downloaded from Splunkbase. If I remember correctly (but correct me if I'm wrong), an add-on is not required (or may not be required) if we have a well-structured data format, like JSON or XML. My question is: if the above assumption is right, are there any other cases where Splunk can perform parsing without the help of an add-on? And if yes, what are they?
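For illustration, the kind of add-on-free parsing I am referring to can be switched on with a couple of props.conf settings (the sourcetype name is a placeholder):

[my_json_sourcetype]
# index-time structured parsing on the forwarder/indexer
INDEXED_EXTRACTIONS = json
# alternatively, search-time JSON field extraction on the search head:
# KV_MODE = json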
I'm trying to filter data that is either pass or fail. Some of my data points that fail also return as a pass at a later time. Is there a way to show only the data points that fail and never pass in a later time frame?
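A minimal sketch of one way to express "failed and never passed", using hypothetical field names test_id and result and placeholder index/sourcetype values:

index=my_index sourcetype=my_results
| stats values(result) as results by test_id
| where isnull(mvfind(results, "^pass$"))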
Hi there,

We use Enterprise Security and one of our most valuable data sources is Sysmon. We rely on it primarily for process start and network/DNS events. We previously used the index directly to write correlation searches for our security use cases. Of course it makes much more sense to use the data models instead, which is what we are now trying to do.

If we look at the Endpoint data model (https://docs.splunk.com/Documentation/CIM/5.0.2/User/Endpoint) for processes and the fields available there, it seems obvious that it is meant for "process start" events. The "action" field refers to default values such as allowed, blocked, and deferred, and there is no other field to differentiate process events of different types. How would I make a distinction between process termination and process execution, for example? It seems you can't.

As mentioned in the subject, we use the official Splunk Add-on for Sysmon and are frankly a bit confused by how the Sysmon events have been mapped. The app maps Sysmon event IDs 1, 5, 6, 7, 8, 9, 10, 15, 17, 18, 24, and 25 into Processes. This includes, among others, "FileCreateStreamHash", "PipeEvent", and "ClipboardChange". Sure, these are actions executed by processes, but what isn't? These and many other event IDs in the list are not only thematically questionable but also miss most of the fields available in the data model. Writing a search based on that data model mapping to find Sysmon process start events is impossible.

It also has other issues. We have the "CreateRemoteThread" event, which maps "SourceImage" into both "process_path" AND "parent_process_path", which is just plain wrong. The parent process in that case was, as expected, another process entirely. That's one example among many.

So, do you use this app, and if so, how do you deal with these issues? We either have to modify the app to work in a way that makes sense or just ignore it and map everything ourselves.
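For reference, a minimal way to tell the two event types apart against the raw index is to filter on the Sysmon event ID directly, since 1 is process creation and 5 is process termination; the index and source names below are assumptions about the environment:

index=windows source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
index=windows source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=5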
Hi, I have a simple AWS environment and want to create an EC2 instance running the Splunk SOAR (On-premises) AMI from the Amazon Marketplace. I am following these instructions from the Splunk Docs. The issue I am facing is that when I attempt to log in to the deployed SOAR instance (after giving it 20 minutes to initialise), I receive a DNS error, as shown in the screenshot below. I am using the public IP address from the AWS console. Does anyone have an idea? Thanks in advance for your help and support!
Hi all, HTTPS is not enabled on our HF, so we are configuring an SSL certificate on it. Please let us know the steps to follow. Thanks
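If the goal is to serve Splunk Web on the HF over HTTPS, a minimal web.conf sketch looks roughly like this (the certificate paths are placeholders; SSL for splunkd and for forwarding is configured separately in server.conf and outputs.conf):

[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key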
Hi,

I have a Splunk role and the allowed index is index=api. There are a number of users that are part of this role, but I don't want all users in this role to see all logs, only those that are relevant to them. These logs can be identified by a specific field called org, e.g. org=X, org=Y, org=Z (I only want specific users in this role to have access to the org value that is relevant to them). Is it possible to restrict access at that level, or would we need to create separate roles and indexes to achieve this granular access?
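For illustration, the kind of per-role restriction I am asking about might look like this in authorize.conf, using a search filter on the role (the role names are placeholders):

[role_api_org_x]
srchIndexesAllowed = api
srchFilter = org=X

[role_api_org_y]
srchIndexesAllowed = api
srchFilter = org=Y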
I have the following criteria from a single event that appears like:

Time: 11/4/22 4:10:28.000 AM
Event:
{
  Total: 6656
  srv110: 1002
  srv111: 1105
  srv112: 1007
  srv113: 995
  srv114: 1269
  srv115: 1278
}

<My Query> | timechart span=1m values(srv*) will return the values as so:

_time           values(srv110)  values(srv111)  values(srv112)  values(srv113)  values(srv114)  values(srv115)
11/4/2022 4:04  1003            1105            1007            996             1268            1278

But I need to return all of the columns like that whenever any one of those values falls under 800 while still being greater than -1.

I attempted to transpose and search from there but I'm failing somewhere.

Any help or nudge in the right direction would be greatly appreciated. Thank you!
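A minimal sketch of one way to keep only the rows where any srv* value lands in that range, using foreach to set a flag (the field name keep is a placeholder):

<My Query>
| timechart span=1m values(srv*) as srv*
| eval keep=0
| foreach srv* [ eval keep=if('<<FIELD>>' > -1 AND '<<FIELD>>' < 800, 1, keep) ]
| where keep=1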
I want to achieve something like this:

index=main servicetype="aws:accesslogs" (apps IN ("app1","app2","app3"))

Note: app1, app2, app3 are static values extracted from a static JSON object (not coming from a search). I want to build a subsearch that extracts the values from the JSON and uses them in the primary search. Which of the generating commands can I use in a subsearch? I am not getting results when I use the search command. When I run the subsearch separately with makeresults I get the values.
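A minimal sketch of the kind of subsearch I have in mind, assuming the JSON has the shape {"apps": ["app1", "app2", "app3"]} and that an apps field exists in the primary events (both are assumptions):

index=main servicetype="aws:accesslogs"
    [| makeresults
     | eval _raw="{\"apps\": [\"app1\", \"app2\", \"app3\"]}"
     | spath path=apps{} output=apps
     | mvexpand apps
     | fields apps
     | format]

Here format turns the subsearch rows into an OR-ed filter such as ( ( apps="app1" ) OR ( apps="app2" ) OR ( apps="app3" ) ) for the outer search.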
Hi. How do I combine these two fields, since the username is similar? The result of my query is the following:

user                       EventID   count
dsanchez.ext3              4740      3
dsanchez.ext3              4767      3
dsanchez.ext3@domain.com   4625      10

I would like the following:

user                       EventID   count
dsanchez.ext3              4740      3
dsanchez.ext3              4767      3
dsanchez.ext3@domain.com   4625      10

My query is:

index=oswinsec user=dsanchez* EventID=4625 OR EventID=4740 OR EventID=4767
| stats count by user, EventID
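If the intent is to count both forms of the name together, a possible sketch is to strip the domain before aggregating (this assumes the part before the @ is the value to group on):

index=oswinsec user=dsanchez* (EventID=4625 OR EventID=4740 OR EventID=4767)
| eval user=mvindex(split(user, "@"), 0)
| stats count by user, EventID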
Hey everyone,

This might be a bit of a silly question, but I've not seen it answered definitively, and anyone I have asked about it has not been able to advise either. I am working on fixing a deployment server and re-introducing forwarder management to a Splunk environment; a previous iteration used it but, oddly, the current one does not. I was wondering: if I enable Forwarder Management, will that cause any issues with already existing forwarders that have some custom stanzas in their inputs.conf (such as resetting them to a default state or to the state present on the deployment server)? Or will that only happen once I go through the process of setting up server classes?

Cheers!
Hi All, I need to write a regular expression for the log below to extract a few fields. Can you please help me with that? Here is the log:

{"log":"[14:38:36.117] [INFO ] [] [c.c.n.b.i.DefaultBusinessEventService] [akka://MmsAuCluster/system/sharding/notificationEnrichmentBpmn/0/oR6fulqKQOmr0axiUzCI2w_10/oR6fulqKQOmr0axiUzCI2w] - method=prepare; triggerName=creationCompleted, entity={'id'='2957b3205bf211ed8ded12d15e0c927a_1972381_29168b705bf211ed8ded12d15e0c927a','eventCode'='MDMT.MANDATE_CREATION_COMPLETED','paymentSystemId'='MMS','servicingAgentBIC'='null','messageIdentification'='2957b3205bf211ed8ded12d15e0c927a','businessDomainName'='Mandate','catalogCode'='MDMT','functionCode'='MANDATE_CREATION_COMPLETED','eventCodeDescription'='Mandate creation request completed','subjectEntityType'='MNDT','type'='MSG_DATA','dataFormat'='JSON','dataEncoding'='UTF-8','requestBody'='null''responseBody'='class ChannelNotification3 { mmsServicerBic: CTBAAUSNBKW trigger: MCRT priority: NORM mandateIdentification: 29168b705bf211ed8ded12d15e0c927a bulkIdentification: null reportIdentification: null actionIdentification: 2916b2805bf211ed8ded12d15e0c927a portingIdentification: null actionExpiryTime: null resolutionRequestedBy: null bulkItemResult: null }'} \n","stream":"stdout","docker":{"container_id":"1cbf6fee4ccb236146b7d66fd2f60e4d47c89012fba7679083141eb9a5342a94"},"kubernetes":{"container_name":"mms-au","namespace_name":"msaas-t4","pod_name":"mms-au-b-1-67d78896c6-c5t7s","container_image":"pso.docker.internal.cba/mms-au:2.3.2-0-1-ff0ef7b23","container_image_id":"docker-pullable://pso.docker.internal.cba/mms-au@sha256:cd39a1f76bb50314638a4b7642aa21d7280eca5923298db0b07df63a276bdd34","pod_id":"f649125d-2978-41ea-908f-f99aa84134f3","pod_ip":"100.64.85.236","host":"ip-10-3-197-109.ap-southeast-2.compute.internal","labels":{"app":"mms-au","dc":"b-1","pod-template-hash":"67d78896c6","release":"mms-au"},"master_url":"https://172.20.0.1:443/api","namespace_id":"48ee871a-7e60-45c4-b0f4-ee320a9512f5","namespace_labels":{"argocd.argoproj.io/instance":"appspaces","ci":"CM0953076","kubernetes.io/metadata.name":"msaas-t4","name":"msaas-t4","platform":"PSU","service_owner":"somersd","spg":"CBA_PAYMENTS_TEST_COORDINATION"}},"hostname":"ip-10-3-197-109.ap-southeast-2.compute.internal","host_ip":"10.3.197.109","cluster":"nonprod/pmn02"}

I need to extract fields called eventCode, trigger, and mmsServicerBic (in the sample above these are MDMT.MANDATE_CREATION_COMPLETED, MCRT, and CTBAAUSNBKW). As those are in different formats and sit under the log sub-field, I am not able to write the extraction. Can anyone help please?

Thanks in advance
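A minimal sketch of one possible set of extractions, assuming the values always appear in the same 'key'='value' and key: value forms as in the sample (the capture-group names are just suggestions):

| rex "'eventCode'='(?<eventCode>[^']+)'"
| rex "mmsServicerBic:\s+(?<mmsServicerBic>\S+)"
| rex "trigger:\s+(?<trigger>\S+)"

The literal colon after trigger keeps the pattern from matching the triggerName= key earlier in the event.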
I'm pulling in events from the journal of a number of Linux hosts using the journald modular input. I'm seeing truncated events every so often and, when I look at the length of _raw, I see that it's always 4088 bytes. The man page for journalctl (https://www.freedesktop.org/software/systemd/man/journalctl.html) says that when events are output in JSON format, "Fields larger than 4096 bytes are encoded as null values. (This may be turned off by passing --all, but be aware that this may allocate overly long JSON objects.)" I'm presuming that that's what's happening with the truncated events that I'm seeing.

Is anyone aware of a way around this? I can't see any configuration setting associated with the journald modular input that would let me enable the '--all' flag. FWIW, I'm running Splunk Enterprise 9.0.2