All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I need help finding a way to search JSON strings where an attribute is empty. Example: get all JSON data where 'tags' is empty. Appreciate your help.

{
  "configurationItemCaptureTime": "2021-04-09T00:40:20.800Z",
  "relationships": [ ],
  "configurationStateId": 3456,
  "configurationItemVersion": "1.3",
  "tags": { },
  "awsAccountId": "12345678",
  "supplementaryConfiguration": { },
  "ARN": "arn:aws:ec2:us-east-1:4567:instance/i-0b8973444340",
  "configurationStateMd5Hash": "",
  "awsRegion": "us-east-1",
  "resourceId": "i-0343434345b26510",
  "configurationItemStatus": "ResourceDeleted",
  "resourceType": "AWS::EC2::Instance",
  "relatedEvents": [ ]
}
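A sketch of one possible approach (index and sourcetype names are placeholders, and `spath` behavior on empty objects can vary by version, so verify against your data): extract `tags` explicitly and keep events where nothing was extracted.

```
index=aws_config_index sourcetype=aws:config
| spath path=tags output=tags_value
| where isnull(tags_value) OR tags_value="{}"
```

If the raw text is predictable, a raw-string filter such as `"\"tags\":{ }"` in the base search can also work as a quick check.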
I am using Splunk for a class. We are creating our own security dashboard as an assignment. One of the tiles we are looking to create is "Email (Note: This panel is for collection and analysis, not for composing/sending email.)" I have Googled and tried to find something, with absolutely no luck. Any idea on what to search for to pull a log that I could turn into a dashboard? Thanks in advance! B
I am trying to figure out how to italicize text in a Splunk dashboard. I've done this using HTML tags, and it works, but it's not giving me what I want. Is there a way to italicize text using Simple XML?   Thank you, Marco
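For reference, Simple XML has no native italic markup for body text; the usual route is an `<html>` element inside a panel. A minimal sketch (the label and the text are placeholders):

```xml
<dashboard>
  <label>Italics Example</label>
  <row>
    <panel>
      <html>
        <p>Plain text with an <i>italicized</i> phrase, or <em>emphasis</em>.</p>
      </html>
    </panel>
  </row>
</dashboard>
```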
Hey Splunkers! I've inherited a Splunk deployment that uses multiple heavy forwarders with TCP routing and several tcpout groups to send data to different locations. One of the issues I'm running into is that most of the original team that deployed it has left, and there is little documentation. What is the best way to troubleshoot some of these flows?    Right now I'm using tcpdump on the specific input/output ports and using that to build myself a Visio diagram of the flows. Is this the best approach, or are there other utilities I could use? I'm currently running 7.3.3.
Hi Folks,   I am working on creating an alert for endpoints where we have to check whether a service came back up after it stopped during an upgrade. For example: Host A's service stops and doesn't come back up for 3 days; then an alert should be triggered. We can also compare on the basis of the field state, i.e. state=Stopped versus state=Running, but the host should be the same. Please let me know if there is any way we can create an alert for this scenario. Thanks,
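A sketch of one way to express this in SPL (the index, sourcetype, and field names are assumptions; adjust to your data): take the latest state per host and alert when it has been Stopped for more than 3 days.

```
index=endpoint_index sourcetype=service_logs (state=Stopped OR state=Running)
| stats latest(state) as last_state, latest(_time) as last_seen by host
| where last_state="Stopped" AND last_seen < relative_time(now(), "-3d")
```

Scheduled daily over a window longer than 3 days, this should list only hosts whose most recent event is a stop that was never followed by a start.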
Hi people, First of all, I'm still a newbie with Splunk, but I'm trying to extract fields from a JSON sent by the Admin Report API from Google, and I'm having trouble. Here is a sample JSON:

{
  "kind": "admin#reports#activity",
  "id": {"time": "2021-04-08T19:21:27.595Z", "uniqueQualifier": "-0987654321", "applicationName": "chat", "customerId": "C0123456A"},
  "etag": "\"giant/string\"",
  "actor": {"callerType": "USER", "email": "mail@acme.com", "profileId": "1234567890"},
  "events": [{"type": "user_action", "name": "message_posted", "parameters": [
    {"name": "room_id", "value": "ABCDEFGH"},
    {"name": "timestamp_ms", "value": "1617909687595199"},
    {"name": "actor", "value": "mail@acme.com"},
    {"name": "message_id", "value": "ZYXWVUTS"},
    {"name": "retention_state", "value": "PERMANENT"},
    {"name": "room_name", "value": ""}]}]
}

What I want is to create some counts/dashboards with fields from "parameters" along with others, but they all have the same name in this nest. This is what I came up with, but all the "name" and "value" strings are grouped:

| spath path=events{} output=events
| mvexpand events
| rename events as _raw
| extract
| rename parameters{}.name as pname, parameters{}.value as pvalue
| table _time pname pvalue

Also, I don't know if this can be done with the search parameters only (sorry if I'm talking nonsense). Thanks
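One commonly used pattern for this kind of name/value nesting (a sketch only; index and sourcetype are placeholders): expand `events{}`, then expand `parameters{}` inside each event, then extract `name`/`value` from each parameter object.

```
index=google sourcetype=gapps:report
| spath path=events{} output=event
| mvexpand event
| spath input=event path=parameters{} output=param
| mvexpand param
| spath input=param path=name output=pname
| spath input=param path=value output=pvalue
| table _time pname pvalue
```

From there, `| eval {pname}=pvalue` plus a `stats` per event can pivot the pairs into real fields if needed.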
We have several log entries with what boils down to a transaction ID. I want to group these entries, and within those groups, I want to create an alert that will trigger if a phrase shows up 10 or more times in 5 minutes. Below is what I currently have, but I don't think I am headed in the right direction.

index=index_name host=server_name sourcetype=moveit:fti*
| transaction tasknumber maxspan=300s
| search "could not get id"
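For what it's worth, a `stats`-based sketch may be cheaper than `transaction` here (the field and phrase are taken from the post; the scheduled window supplies the 5 minutes):

```
index=index_name host=server_name sourcetype=moveit:fti* "could not get id"
| stats count by tasknumber
| where count >= 10
```

Run as an alert scheduled every 5 minutes over the last 5 minutes, this triggers per transaction ID that logged the phrase 10 or more times.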
How can I use a lookup CSV from another index? I have access to both indexes. Thanks.
McAfee data ingested into Splunk is not parsing correctly. How do I fix it? I am getting a lot of junk. Please advise on how I can make the data readable.
We have two index clusters: one has 5 indexers (cluster 2) and the other has 100+ indexers (cluster 1). We want to move the 5 indexers (physical servers) to the larger cluster (cluster 1) and disable the smaller cluster master (cluster 2). Is there any way to do this without moving each index separately? All of our search head clusters can currently see both index clusters. We are on Linux and Splunk 7.3.5, soon to upgrade to Splunk 8.
I have a really simple query that I'd like to join with Enterprise Security's Identity data. In this case, simply grab the user from a Palo Alto system log, cross-reference the user with the ES Identity lookup, and grab the priority field for that user. Simple, right?? Here is the SPL I've tried:

index=pan sourcetype="pan:system" log_subtype=globalprotect description IN ("GlobalProtect gateway client configuration generated*")
| join type=left user [ | inputlookup es_identity_lookup | search identity=user | fields priority ]
| table _time user priority

But nothing populates the priority field. Also tried:

index=pan sourcetype="pan:system" log_subtype=globalprotect description IN ("GlobalProtect gateway client configuration generated*")
| lookup es_identity_lookup identity AS user OUTPUT priority
| table _time user priority

But this doesn't even run; it throws an error. Any help here would be most appreciated! Thanks in advance.
How do I find the versions of all my UFs and HFs, and their install dates, on Splunk Enterprise?
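One commonly suggested sketch, assuming the forwarders send their internal logs to the indexers (note this shows the versions of connecting forwarders, not install dates, and the field names should be verified against your `metrics.log` data):

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(version) as version, latest(_time) as last_seen by hostname
| convert ctime(last_seen)
```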
How do I find a list of scheduled saved searches in ES, especially the ones that run in real time? Can the Monitoring Console be used for this purpose? If yes, how, please?
Hello Everyone, Doing some Splunk maintenance today, working through issues. This is one that has me stumped. It comes up when I restart Splunk; Splunk is still running, though. Can anyone help me find the location of this error so I can fix it?     Thanks
Hi,   I have the following CSV data that I've uploaded into Splunk:

iso_code,continent,location,date,total_cases
USA,North America,United States,2020-01-22,1.0
USA,North America,United States,2020-01-23,1.0
USA,North America,United States,2020-01-24,2.0
USA,North America,United States,2020-01-25,2.0

My props.conf is as below:

[csv-c1]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = date
TIME_FORMAT = %Y-%m-%d
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

Now, when I perform a search on this data, the events are listed with the expected _time values. But when I try to create a table with the _time field, it appears as if Splunk treats _time as a string and not as a date object: the chronological order of the dates is lost, as is the HH:MM:SS part. Can someone please point out why Splunk is behaving this way? Surprisingly, if I index the same data with the current timestamp, everything works fine, and _time loses neither its chronological order nor its HH:MM:SS part when displayed using the table command.
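As a quick diagnostic sketch (the sourcetype name is taken from the post; the table fields are from the CSV): if `_time` is a real epoch value, an explicit sort plus `fieldformat` should restore both the order and the HH:MM:SS part; if it doesn't, the indexed `date` string field may be what the table is actually showing.

```
sourcetype=csv-c1
| sort 0 _time
| fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table _time location total_cases
```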
With the below query I'm trying to sort dateTime in descending order, but the sorting is not working. Could someone please help me identify the issue in the query?

hostname="alt*" [search "Starting Batch job" AND hostname="alt*" UUID=* | stats values(UUID) as uuid by UUID | fields UUID]
| regex "JOB Execution*"
| stats values(@timestamp) as dateTime, values(UUID) as uuid, values(message) as message
| sort - dateTime
| table dateTime, uuid, message
Hello all, I want to add a "last updated date" column to a dashboard for a KV store lookup. When I use time() or now() and update a single row, the last updated date gets changed for all the records in the table.   Kindly suggest how to update the last updated date row-wise, for the updated record only and not the entire table.   Thanks in advance!
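A sketch of one approach (the lookup and field names are placeholders; `_key` is the KV store's internal row ID): when writing the update back, touch the timestamp only on the matching row.

```
| inputlookup my_kvstore_lookup
| eval last_updated = if(_key="<key of the row being updated>", now(), last_updated)
| outputlookup my_kvstore_lookup
```

The elided key is whatever identifies the row you are editing; updates made through the KV store REST API can likewise set the field on just that record.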
I have proxy logs in which I am interested in 4 fields: the IP address of the user's computer, the category of the site they visit, and the total amount of incoming and outgoing traffic for each category. Right now I get the first table on the screenshot, but I need it to look like the second table. I just can't find a solution; I ask the SPL gurus to help me))) My query:

index=proxy
| stats sum(bytes_in) as totalBytesIn, sum(bytes_out) as totalBytesOut, values(category) by src_host
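A sketch of one way to get per-category columns (the exact target layout is only in the screenshot, so treat this as a guess): aggregate by host and category, then pivot with `xyseries` or `chart`.

```
index=proxy
| stats sum(bytes_in) as totalBytesIn, sum(bytes_out) as totalBytesOut by src_host, category
| eval totalBytes = totalBytesIn + totalBytesOut
| xyseries src_host category totalBytes
```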
We have had Microsoft Azure Add-On 3.0.0 installed and running successfully until the old client secret expired. A new secret was generated in the Azure AD portal and configured into the Azure Add-On in Splunk. But we are getting an error that seems to be token related. We have tried deleting and recreating the input, but it doesn't seem to matter. The only thing that changed was creating the new secret in the portal and then configuring the app to use it.

2021-04-08 10:59:07,307 ERROR pid=3520 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_audit.py", line 84, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_MS_AAD_audit.py", line 61, in collect_events
    access_token = azauth.get_graph_access_token(client_id, client_secret, tenant_id, helper)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/auth.py", line 18, in get_graph_access_token
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/auth.py", line 16, in get_graph_access_token
    return _get_access_token(endpoint, helper, payload)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/auth.py", line 39, in _get_access_token
    raise e
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/auth.py", line 37, in _get_access_token
    return response['access_token']
KeyError: 'access_token'
Hi folks, I know of a few ways to ingest Azure data into Splunk:

1. Microsoft Graph Security API Add-On for Splunk -> you can work with the alerts you get from the platform, right?
2. Microsoft Azure Add-on for Splunk -> I get Azure AD data, user sign-ins, directory audits, and so on from the platform.
3. Splunk Add-on for Microsoft Cloud Services -> also a way, similar to way 2.

What do you recommend for creating custom use cases / correlation rules with a focus on Azure AD / identity? I think way 2 or way 3. Thanks a lot and best regards, Philipp