All Topics


I'm trying to filter data that is either pass or fail. Some of my data points that fail also come back as a pass later on. Is there a way to show only the data points that fail and never pass at a later point in time?
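A minimal SPL sketch of one way to approach this, assuming each data point carries an identifier field (called id here, a placeholder) and a status field whose values are pass or fail, and that index=your_index stands in for the real index: collect every status seen per identifier and keep only the identifiers that never report a pass.

index=your_index status IN ("pass", "fail")
| stats values(status) as statuses latest(_time) as last_seen by id
| where isnull(mvfind(statuses, "pass"))

The mvfind call returns null when no value in statuses matches "pass", so the where clause keeps only identifiers that failed and never passed.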
Hi there. We use Enterprise Security, and one of our most valuable data sources is Sysmon; we rely on it primarily for process start and network/DNS events. We previously wrote correlation searches for our security use cases directly against the index. Of course it makes much more sense to use the data models instead, which is what we are now trying to do.

If we look at the https://docs.splunk.com/Documentation/CIM/5.0.2/User/Endpoint data model for processes and the fields available there, it seems obvious that it is meant for "process start" events. The "action" field refers to default values such as allowed, blocked, and deferred, and there is no other field to differentiate process events of different types. How would I make a distinction between process termination and process execution, for example? It seems you can't.

As mentioned in the subject, we use the official Splunk Add-on for Sysmon and are frankly a bit confused by how the Sysmon events have been mapped. The add-on maps Sysmon event IDs 1, 5, 6, 7, 8, 9, 10, 15, 17, 18, 24, and 25 into the Processes dataset. This includes, among others, "FileCreateStreamHash", "PipeEvent" and "ClipboardChange". Now sure, these are actions executed by processes, but what isn't? These and many other event IDs in the list are not only thematically questionable but also miss most of the fields available in the data model. Writing a search based on that data model mapping to find Sysmon process start events is impossible.

It also has other issues. The "CreateRemoteThread" event maps "SourceImage" into both "process_path" AND "parent_process_path", which is just plain wrong; the parent process in that case was, as expected, another process entirely. That's one example among many.

So, do you use this app, and if so how do you deal with these issues? We either have to manipulate the add-on to work in a way that makes sense or just ignore it and map everything ourselves.
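Not an answer to the mapping question itself, but as an illustration of one workaround that is sometimes used: a hedged sketch that searches the Processes dataset un-accelerated and filters on the raw Sysmon EventCode, assuming the search-time extracted EventCode field is still present on the events the datamodel command returns.

| datamodel Endpoint Processes search
| search EventCode=1
| stats count by Processes.process, Processes.parent_process, Processes.dest

This loses the speed benefit of accelerated tstats searches, which is part of why a cleaner event-type distinction in the mapping would matter.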
Hi, I have a simple AWS environment and want to create an EC2 instance running the Splunk SOAR (On-premises) AMI from the Amazon Marketplace. I am following these instructions from the Splunk Docs. The issue I am facing is that when I attempt to log in to the deployed SOAR instance (after giving it 20 minutes to initialise), I receive a DNS error, as shown in the screenshot below. I am using the public IP address from the AWS console. Does anyone have an idea? Thanks in advance for your help and support!
Hi all, HTTPS is not enabled on our HF, so we are configuring an SSL certificate on it. Please let us know the steps to follow. Thanks
Hi, I have a Splunk role whose allowed index is index=api. There are a number of users in this role, but I don't want all of them to see all logs, only those that are relevant to them. The relevant logs can be identified by a specific field called org, e.g. org=X, org=Y, org=Z (I only want specific users in this role to have access to the org value that is relevant to them). Is it possible to restrict access at that level, or would we need to create separate roles and indexes to achieve this granular access?
I have the following data from a single event that appears like:

Time: 11/4/22 4:10:28.000 AM
Event:
{ [-]
Total: 6656
srv110: 1002
srv111: 1105
srv112: 1007
srv113: 995
srv114: 1269
srv115: 1278
}

<My Query> | timechart span=1m values(srv*) returns the values like so:

_time | values(srv110) | values(srv111) | values(srv112) | values(srv113) | values(srv114) | values(srv115)
----- | -------------- | -------------- | -------------- | -------------- | -------------- | --------------
11/4/2022 4:04 | 1003 | 1105 | 1007 | 996 | 1268 | 1278

But I need to return all of the columns for a row whenever any one of those values falls under 800 while still being greater than -1. I attempted to transpose and search from there, but I'm failing somewhere. Any help or nudge in the right direction would be greatly appreciated. Thank you!
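A hedged sketch of one way to keep a whole timechart row only when at least one srv* value is below 800 and above -1, using foreach to set a flag; the rename step and column names assume the values(srv*) output shown above.

<My Query>
| timechart span=1m values(srv*)
| rename "values(srv*)" as "srv*"
| eval below_threshold=0
| foreach srv* [ eval below_threshold=if('<<FIELD>>' < 800 AND '<<FIELD>>' > -1, 1, below_threshold) ]
| where below_threshold=1
| fields - below_threshold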
I want to achieve something like this:

index=main servicetype="aws:accesslogs" (apps in ("app1","app2","app3"))

Note: app1, app2, app3 are static values extracted from a static JSON object (not coming from a search). I want to build a subsearch that extracts the values from the JSON and uses them in the primary search. Which generating command can I use in the subsearch? I am not getting results when I use the search command, but when I run the subsearch separately with makeresults I get the values.
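A hedged sketch of a makeresults-based subsearch that parses the app names out of a hard-coded JSON string with spath and hands them back to the outer search via format; the JSON string, the apps field name, and the base search are placeholders built from the example above.

index=main servicetype="aws:accesslogs"
    [| makeresults
    | eval _raw="{\"apps\": [\"app1\", \"app2\", \"app3\"]}"
    | spath path=apps{} output=apps
    | mvexpand apps
    | fields apps
    | fields - _time _raw
    | format]

The format command turns the rows into ( ( apps="app1" ) OR ( apps="app2" ) OR ( apps="app3" ) ), which the outer search then applies as a filter.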
Hi. How do I combine these two fields, since the username is similar? The result of my query is the following:

user | EventID | count
---- | ------- | -----
dsanchez.ext3 | 4740 | 3
dsanchez.ext3 | 4767 | 3
dsanchez.ext3@domain.com | 4625 | 10

I would like the following:

user | EventID | count
---- | ------- | -----
dsanchez.ext3 | 4740 | 3
dsanchez.ext3 | 4767 | 3
dsanchez.ext3@domain.com | 4625 | 10

My query is:

index=oswinsec user=dsanchez* EventID=4625 OR EventID=4740 OR EventID=4767 | stats count by user, EventID
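If the intent is to count both forms of the username together, one hedged sketch is to normalise user before the stats by stripping an @domain suffix; the domain pattern here is an assumption, not something confirmed by the data.

index=oswinsec user=dsanchez* EventID=4625 OR EventID=4740 OR EventID=4767
| eval user=replace(user, "@.*$", "")
| stats count by user, EventID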
Hey everyone, this might be a bit of a silly question, but I've not seen it answered definitively, and anyone I have asked has not been able to advise either. I am working on fixing a deployment server and re-introducing forwarder management to a Splunk environment; a previous iteration used it, but oddly the current one does not. I was wondering: if I enable Forwarder Management, will that cause any issues with already existing forwarders that have custom stanzas in their inputs.conf (i.e. resetting them to a default state or to the state present on the deployment server)? Or will that only take place when going through the process of assigning server classes? Cheers!
Hi All, I need to write a regular expression for the log below to extract a few fields. Can you please help me with that? Here is the log:

{"log":"[14:38:36.117] [INFO ] [] [c.c.n.b.i.DefaultBusinessEventService] [akka://MmsAuCluster/system/sharding/notificationEnrichmentBpmn/0/oR6fulqKQOmr0axiUzCI2w_10/oR6fulqKQOmr0axiUzCI2w] - method=prepare; triggerName=creationCompleted, entity={'id'='2957b3205bf211ed8ded12d15e0c927a_1972381_29168b705bf211ed8ded12d15e0c927a','eventCode'='MDMT.MANDATE_CREATION_COMPLETED','paymentSystemId'='MMS','servicingAgentBIC'='null','messageIdentification'='2957b3205bf211ed8ded12d15e0c927a','businessDomainName'='Mandate','catalogCode'='MDMT','functionCode'='MANDATE_CREATION_COMPLETED','eventCodeDescription'='Mandate creation request completed','subjectEntityType'='MNDT','type'='MSG_DATA','dataFormat'='JSON','dataEncoding'='UTF-8','requestBody'='null''responseBody'='class ChannelNotification3 { mmsServicerBic: CTBAAUSNBKW trigger: MCRT priority: NORM mandateIdentification: 29168b705bf211ed8ded12d15e0c927a bulkIdentification: null reportIdentification: null actionIdentification: 2916b2805bf211ed8ded12d15e0c927a portingIdentification: null actionExpiryTime: null resolutionRequestedBy: null bulkItemResult: null }'} \n","stream":"stdout","docker":{"container_id":"1cbf6fee4ccb236146b7d66fd2f60e4d47c89012fba7679083141eb9a5342a94"},"kubernetes":{"container_name":"mms-au","namespace_name":"msaas-t4","pod_name":"mms-au-b-1-67d78896c6-c5t7s","container_image":"pso.docker.internal.cba/mms-au:2.3.2-0-1-ff0ef7b23","container_image_id":"docker-pullable://pso.docker.internal.cba/mms-au@sha256:cd39a1f76bb50314638a4b7642aa21d7280eca5923298db0b07df63a276bdd34","pod_id":"f649125d-2978-41ea-908f-f99aa84134f3","pod_ip":"100.64.85.236","host":"ip-10-3-197-109.ap-southeast-2.compute.internal","labels":{"app":"mms-au","dc":"b-1","pod-template-hash":"67d78896c6","release":"mms-au"},"master_url":"https://172.20.0.1:443/api","namespace_id":"48ee871a-7e60-45c4-b0f4-ee320a9512f5","namespace_labels":{"argocd.argoproj.io/instance":"appspaces","ci":"CM0953076","kubernetes.io/metadata.name":"msaas-t4","name":"msaas-t4","platform":"PSU","service_owner":"somersd","spg":"CBA_PAYMENTS_TEST_COORDINATION"}},"hostname":"ip-10-3-197-109.ap-southeast-2.compute.internal","host_ip":"10.3.197.109","cluster":"nonprod/pmn02"}

I need to extract three fields: eventCode, trigger, and mmsServicerBic (the three highlighted above). Those values are in different formats and sit under the log sub-field, so I am not able to write the extraction myself. Can anyone help, please? Thanks in advance.
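A hedged sketch of rex extractions for those three values, assuming they always appear in the same 'eventCode'='...', mmsServicerBic: ..., and trigger: ... forms shown in this sample event; index and sourcetype below are placeholders.

index=your_index sourcetype=your_sourcetype
| rex field=_raw "'eventCode'='(?<eventCode>[^']+)'"
| rex field=_raw "mmsServicerBic:\s+(?<mmsServicerBic>\S+)"
| rex field=_raw "\strigger:\s+(?<trigger>\S+)"
| table eventCode trigger mmsServicerBic

If the extractions need to live in configuration rather than in a search, the same patterns should carry over.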
I'm pulling in events from the journal of a number of Linux hosts using the journald modular input. I'm seeing truncated events every so often and, when I look at the length of _raw, I see that it's always 4088 bytes. The man page for journalctl (https://www.freedesktop.org/software/systemd/man/journalctl.html) says that when events are output using JSON format, "Fields larger than 4096 bytes are encoded as null values. (This may be turned off by passing --all, but be aware that this may allocate overly long JSON objects.)" I'm presuming that that's what's happening with the truncated events I'm seeing. Is anyone aware of a way around this? I can't see any configuration setting associated with the journald modular input that would let me enable the '--all' flag. FWIW, I'm running Splunk Enterprise 9.0.2.
Hello Splunkers, I am using the search below, which outputs the fields host, Component, and output, and then compares the results with a lookup file (below) that has the fields Component and output. I filter the results with | where mvcount(from)=1 AND from="search" and I get correct results (attached screenshot), but I want to add another column from the lookup to show what is in the lookup versus what is in the search. For instance, in the attached screenshot the output value for GPU0 is 96.00.2F.00.06, which differs from the lookup; I want to show the lookup's output value for GPU0 beside the output column.

The lookup:

Components | output
---------- | ------
BMC and AUX | 22.10 0x12
CPLD | 00 00 46
SBIOS version | 0.2
nvme9 | EPK9CB5Q
nvme8 | EPK9CB5Q
nvme7 | EPK9CB5Q
nvme6 | EPK9CB5Q
nvme5 | EPK9CB5Q
nvme4 | EPK9CB5Q
nvme3 | EPK9CB5Q
nvme2 | EPK9CB5Q
nvme1 | EPK9CB5Q
nvme0 | EPK9CB5Q
GPU0 | 96.00.39.00.08
GPU1 | 96.00.39.00.08
GPU2 | 96.00.39.00.08
GPU3 | 96.00.39.00.08
GPU4 | 96.00.39.00.08
GPU5 | 96.00.39.00.08
GPU6 | 96.00.39.00.08
GPU7 | 96.00.39.00.08

The search:

index=preos host IN(*) *CPLD* OR BMC OR SBIOS OR *nvme* OR "*GPU* PCISLOT*" host=*
| rex field=_raw "log-inventory.sh\[(?<id>[^\]]+)\]\:\s*(?<Component>[^\:]+)\:\s*(?<Hardware_Details>.*)"
| rex field=_raw "log-inventory.sh\[\d*\]\:\s*CPLD\:\s*(?<Hardware>[^.*]+)"
| rex field=_raw "log-inventory.sh\[\d*\]\:\s*BMC\:\s*version\:\s*(?<Hardware1>[^\,]+)"
| rex field=_raw "log-inventory.sh\[\d*\]\:\s*SBIOS\s*version\:\s*(?<Hardware2>[^ ]+)"
| rex field=_raw "log-inventory.sh\[\d*\]\:\s*nvme\d*\:.*FW\:\s*(?<Hardware3>[^ ]+)"
| rex field=_raw "VBIOS\:\s*(?<Hardware4>[^\,]+)"
| eval output = mvappend(Hardware, Hardware1, Hardware2, Hardware3, Hardware4)
| replace BMC WITH "BMC and AUX" in Component
| table Component output host _time
| sort Component
| dedup Component
| fields - _time
| eval from="search"
| append [| inputlookup component.csv | table Component output | eval from="lookup"]
| stats values(from) as from values(host) as host by Component output
| where mvcount(from)=1 AND from="search"
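One hedged alternative is to skip the append/stats pattern and pull the lookup's value in as its own column with the lookup command, then keep only the rows where the values differ. This assumes component.csv is addressable by the lookup command and keyed on Component; the middle line stands in for the existing extraction pipeline above.

index=preos host IN(*) *CPLD* OR BMC OR SBIOS OR *nvme* OR "*GPU* PCISLOT*" host=*
| ... (same rex extractions, eval output, replace, sort, and dedup as above) ...
| table Component output host
| lookup component.csv Component OUTPUT output AS expected_output
| where output!=expected_output
| table Component output expected_output host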
Hi, I have a dataset like the one below:

[
{classificationA: null, classificationB: null},
{classificationA: {name: 'Education'}, classificationB: {name: 'Education'}},
{classificationA: {name: 'IT'}, classificationB: {name: 'IT'}}
]

My aim is to find all the rows whose classificationA is not equal to classificationB, so given the above dataset it should return zero rows. I thought it should be:

| where classficationA != classficationB

but it is not working. Can anyone help? Thank you!
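A hedged sketch assuming the events are JSON and that it is the nested name values that should be compared: where cannot compare the nested objects directly, so spath pulls each name into a scalar field first, and coalesce handles rows where only one side is null.

| spath path=classificationA.name output=classA
| spath path=classificationB.name output=classB
| where coalesce(classA, "null") != coalesce(classB, "null")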
Hey Guys, I am working on a requirement where I have to extract the value of some nodes in XML which are in a name value pair. Those values are nothing but the products purchased from our website but the XML also contains other elements like discount code, additional products etc and I don't have a way to differentiate between those. I have attached a block of code from my XML which I am really interested in. Is there a way to extract those values and then display the top products purchased/used from different events. I am interested in the 2 fields  productIdentifier and name     <ns3:orderItem> <ns3:product> <ns3:productIdentifier>XXXXXXXXXXXXXXX</ns3:productIdentifier> <ns3:name>XXXXX</ns3:name> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> <ns3:product> <ns3:productIdentifier>P11845546565263</ns3:productIdentifier> <ns3:name>Mixit TV (M 2)</ns3:name> <ns3:instanceIdentifier>A</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> <ns3:product> <ns3:productIdentifier>P1187877564259</ns3:productIdentifier> <ns3:name>360 Box</ns3:name> <ns3:instanceIdentifier>A</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> <ns3:product> <ns3:productIdentifier>P118565565656</ns3:productIdentifier> <ns3:name>360 Activation omph</ns3:name> <ns3:instanceIdentifier>A</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> </ns3:product> <ns3:product> <ns3:productIdentifier>P1068434343545681</ns3:productIdentifier> <ns3:name>Fibre Broadband</ns3:name> <ns3:instanceIdentifier>H</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> <ns3:product> <ns3:productIdentifier>P1046134534341</ns3:productIdentifier> <ns3:name>Manned Install Only code</ns3:name> <ns3:instanceIdentifier>H</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> <ns3:product> <ns3:productIdentifier>P1015455566454</ns3:productIdentifier> <ns3:name>Manned Install Charge</ns3:name> <ns3:instanceIdentifier>H</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> </ns3:product> <ns3:product> <ns3:productIdentifier>P1243436565434</ns3:productIdentifier> <ns3:name>Weekend chatter</ns3:name> <ns3:instanceIdentifier>I</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> <ns3:directoryServicesRequest> <ns3:includePhoneNumber>false</ns3:includePhoneNumber> </ns3:directoryServicesRequest> <ns3:product> <ns3:productIdentifier>A1000546567565</ns3:productIdentifier> <ns3:name>Voicemail Free</ns3:name> <ns3:instanceIdentifier>I</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> <ns3:product> <ns3:productIdentifier>P10454565656545</ns3:productIdentifier> <ns3:name>VOC Line Rental</ns3:name> <ns3:instanceIdentifier>I</ns3:instanceIdentifier> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> </ns3:product> <ns3:product> <ns3:productIdentifier>D1057845454545</ns3:productIdentifier> <ns3:name>Free Install - Non QS address</ns3:name> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> <ns3:product> <ns3:productIdentifier>P105704545458</ns3:productIdentifier> <ns3:name>Install Activation Fee</ns3:name> <ns3:action>New</ns3:action> <ns3:quantity>1</ns3:quantity> </ns3:product> </ns3:product> </ns3:orderItem>       Let me know if anyone has worked on this type of requirement before 
and if they can be of any help. Best Regards, SA
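Since the products are nested at several depths, one hedged approach is a multivalue rex over the raw XML rather than spath; this assumes every ns3:name immediately follows its ns3:productIdentifier, as in the sample above, and index/sourcetype are placeholders.

index=your_index sourcetype=your_xml_sourcetype
| rex field=_raw max_match=0 "<ns3:productIdentifier>(?<productIdentifier>[^<]+)</ns3:productIdentifier>\s*<ns3:name>(?<name>[^<]+)</ns3:name>"
| eval product=mvzip(productIdentifier, name, " : ")
| mvexpand product
| stats count by product
| sort - count

The final stats/sort gives a top-products view across events; splitting product back into identifier and name columns with another eval is straightforward if separate columns are needed.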
Hello, can anyone please help me identify the Cisco add-on for Splunk that collects latency info from Cisco devices? I am seeing a bunch of add-ons on Splunkbase but can't figure out exactly which one. Please help with your thoughts. Thanks
Hi, we have recently switched from Phantom to SOAR and I'm trying to send our triggered alerts to SOAR. I have tested the connection from Splunk Enterprise to SOAR and it works, but I keep getting the following error for one alert:

11-04-2022 05:31:21.724 +1100 WARN sendmodalert [17285 AlertNotifierWorker-0] - action=sendtophantom - Alert action script returned error code=1
11-04-2022 05:31:21.724 +1100 INFO sendmodalert [17285 AlertNotifierWorker-0] - action=sendtophantom - Alert action script completed in duration=1394 ms with exit code=1
Is it possible to give a user two options when drilling down on a panel? For example, the dashboard has a table with one column, A. Currently, clicking a value in column A updates a token in the dashboard. Instead, I would like a pop-up to appear when the user clicks a value in column A, giving them the choice of either updating the current dashboard or going to another dashboard.
Can anyone help me resolve my issue? Here is the query I am using:

index="dynatrace" "userActions{}.name" = "clickonnotes" | table "userActions{}.name","userActions{}.visuallyCompleteTime"

Output (both columns come back as multivalue fields in a single row):

userActions{}.name:
loadingofpage/cc/claimcenter.do
clickonsearch
keypressonc1
clickony3wc25120
clickonnotes
clickonlossdetails
clickonindemnity

userActions{}.visuallyCompleteTime:
9356
516
609
1276
981
1371
392
640
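If the underlying goal is to pair each action name with its visuallyCompleteTime and keep only clickonnotes, a hedged sketch is to re-extract the userActions array with spath and expand it, so the filter applies per action rather than per event; this assumes the events are Dynatrace JSON with a userActions array, as the field names suggest.

index="dynatrace"
| spath path=userActions{} output=action
| mvexpand action
| spath input=action path=name output=action_name
| spath input=action path=visuallyCompleteTime output=visually_complete_time
| where action_name="clickonnotes"
| table action_name visually_complete_time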
I want to display the output in a table format. Basically, I have a list of response value fields that I want to print out, but only if they have something in them; I don't want to routinely display 10 extra fields that are usually empty.
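There is no single built-in switch for hiding empty columns, but for a single-row result one hedged trick is to transpose, drop the empty rows, and transpose back; the base search and the response field names below are placeholders.

<your base search> | table response_field1 response_field2 response_field3
| transpose
| where isnotnull('row 1') AND 'row 1'!=""
| transpose header_field=column
| fields - column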
I have a dashboard with tokens on it: Token1, Token2, and Token3. I have a table that contains multiple columns: x, y, z. When I click the value of column x, I want to update Token1 on the same page. When I click the value of column y, I want to update Token2 and so on. When I click a column I only want to update the corresponding token and not the rest of the tokens.