All Posts

It doesn't have to be the whole dashboard, but it should at least match the visualisation you shared earlier; or, if it doesn't, share the part that isn't working for you (so we can test it, or our solutions, for you).
Have you tried "Subject: $result.Level$ app in $result.APPDIRS$"?
https://docs.splunk.com/Documentation/Splunk/latest/Alert/EmailNotificationTokens#Result_tokens
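Per that docs page, $result.fieldname$ tokens are taken from the first row of the search results. For reference, a minimal savedsearches.conf sketch of where those tokens would go (the stanza name and recipient are placeholders, not from the original post):

[APPDIR level alert]
action.email = 1
action.email.to = oncall@example.com
action.email.subject = $result.Level$ app in $result.APPDIRS$
action.email.message.alert = $result.Level$ instance(s) affected under $result.APPDIRS$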
Where is the web server actually installed and run from for SOAR in a RHEL environment? Unlike the Splunk Web UI, where I can modify the web.conf file, for SOAR I only see a massive number of .py files everywhere. I need to figure out where it actually starts and sets its paths, specifically where SSL is chosen. Assume I have installed SOAR to /data. Thanks for any assistance!
Hi @ITWhisperer, the source code has very long lines, so I am unable to paste it or attach it as a file. Kindly advise.
I have an alert based on the search below (obfuscated):

...
| eval APPDIR=source
| rex field=APPDIR mode=sed "s|/logs\/.*||g"
| eventstats values(APPDIR) as APPDIRS
| eval Level=if("/app/5000" IN (APPDIRS), "PRODUCTION", "Non-production")
| eval APPDIRS=mvjoin(APPDIRS, ",")

The idea is to discern the affected application instance (there are multiple logs under each /app/instance/logs/) and then to determine whether the instance is a production one or not. In the search results all three new fields (APPDIR, APPDIRS, and Level) are populated as expected, but they don't show up in the e-mails. The "Subject: $Level$ app in $APPDIRS$" expands to a mere "Subject:  app in ", nor are the fields expanded in the body of the alert e-mail. Now, I understand that event-specific fields -- like the singular APPDIR above -- cannot be expected to work in an alert. But the plural APPDIRS, as well as Level, are aggregates, aren't they? What am I doing wrong, and how do I fix it?
spath will extract the fields
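For example, a minimal sketch, assuming the events land in an index named aws (the index name is an assumption; the sourcetype is the one shown in the post) and that _raw is the JSON itself. With no arguments, spath parses _raw as JSON and creates dotted field names for the nested keys:

index=aws sourcetype="aws:cloudwatchlogs"
| spath
| table eventTime eventName eventSource awsRegion sourceIPAddress userIdentity.sessionContext.sessionIssuer.userName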
Hello, can someone help me extract the fields from this nested JSON raw log?

{"eventVersion":"1.09","userIdentity":{"type":"AssumedRole","principalId":"AROAUDGMTGGHXY5YL2EW6:redlock","arn":"arn:aws:sts::281749434767:assumed-role/PrismaCloudRole-804603675133320192-member/redlock","accountId":"281749434767","accessKeyId":"ASIAUDGMTGGHRRR2WZT2","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAUDGMTGGHXY5YL2EW6","arn":"arn:aws:iam::281749434767:role/PrismaCloudRole-804603675133320192-member","accountId":"281749434767","userName":"PrismaCloudRole-804603675133320192-member"},"attributes":{"creationDate":"2024-04-09T05:58:35Z","mfaAuthenticated":"false"}}},"eventTime":"2024-04-09T12:43:01Z","eventSource":"athena.amazonaws.com","eventName":"ListWorkGroups","awsRegion":"us-west-2","sourceIPAddress":"52.52.50.152","userAgent":"Vert.x-WebClient/4.4.6","requestParameters":{"maxResults":50},"responseElements":null,"requestID":"59f0ad81-7607-40bb-a40b-eab3fad0fb7a","eventID":"4bc352ff-0cc5-49cb-9b0e-2784bffbb58f","readOnly":true,"eventType":"AwsApiCall","managementEvent":true,"recipientAccountId":"281749434767","eventCategory":"Management","tlsDetails":{"tlsVersion":"TLSv1.3","cipherSuite":"TLS_AES_128_GCM_SHA256","clientProvidedHostHeader":"athena.us-west-2.amazonaws.com"}}

logSource: aws-controltower/CloudTrailLogs:o-bj312h8hh6_281749434767_CloudTrail_us-east-1
logSourceType: aws:cloudwatchlogs
The old events cannot be searched because they're on the old volume. Indexers have only one volume definition, so they only know the current volume. Use OS tools to copy the directories from the old volume to the new one, then restart the indexers.
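A minimal sketch of that copy, assuming the old and new volumes are mounted at /old_volume and /new_volume (both paths are placeholders; adjust to your environment):

# stop Splunk before copying index directories
$SPLUNK_HOME/bin/splunk stop
# -a preserves permissions, ownership, and timestamps
rsync -a /old_volume/splunk_indexes/ /new_volume/splunk_indexes/
# restart so the indexer discovers the buckets on the new volume
$SPLUNK_HOME/bin/splunk start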
Hi @Paul.Gilbody, can you share the solution here? I'm stuck with the same issue.
App started successfully (id: 1712665900147) on asset:
Loaded action execution configuration
executing action: test_asset_connectivity
Connecting to 192.168.208.144...
Connectivity test failed
1 action failed
Failed to connect to PHANTOM server. No route to host.
Connectivity test failed

I am facing this issue and have tried every possible approach.
I would look at this, but unfortunately playbooks that were developed in 6.x will not load in 5.x
I have the same issue, but these arguments are not set in the code. It's the same issue the OP is writing about: the table is shown if I select the classic dashboard, but not in Studio.
Hi all, I created a volume and changed the homePath of all indexes to use it. Now I can't search events that existed before this volume was created, and the search heads only show events that are on this volume. How can I move the old, existing events to this volume so I can search them? Thank you.
Found my answer here: Customize Incident Review in Splunk Enterprise Security - Splunk Documentation
Hello guys, I'm currently trying to set up Splunk Enterprise in a cluster architecture (3 search heads and 3 indexers) on Kubernetes, using the official Splunk Operator and the Splunk Enterprise Helm chart. While trying to change the initial admin credentials on all the instances, I hit the following issue: all instances come up and become ready as Kubernetes pods except the indexers, which never start and remain in an error phase without any logs indicating the reason. The following is a snippet of my values.yaml file, which is provided to the Splunk Enterprise chart:

sva:
  c3:
    enabled: true
    indexerClusters:
      - name: idx
    searchHeadClusters:
      - name: shc
indexerCluster:
  enabled: true
  name: "idx"
  replicaCount: 3
  defaults:
    splunk:
      hec_disabled: 0
      hec_enableSSL: 0
      hec_token: "test"
      password: "admintest"
      pass4SymmKey: "test"
      idxc:
        secret: "test"
      shc:
        secret: "test"
  extraEnv:
    - name: SPLUNK_DEFAULTS_URL
      value: "/mnt/splunk-defaults/default.yml"

Initially, I was not passing SPLUNK_DEFAULTS_URL, but after some debugging I found that the "defaults" field writes only to /mnt/splunk-defaults/default.yml, while by default all instances read from /mnt/splunk-secrets/default.yml, so I had to change it. After that, the admin password did change to "admintest" on all Splunk instances, but the indexer pods still would not start. Note: I also tried changing the password by providing the SPLUNK_PASSWORD environment variable to all instances, with the same behavior.
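When pods stay in an error phase without application logs, the usual first step is to inspect them at the Kubernetes level; a generic sketch (the namespace and pod name are placeholders, adjust to whatever the operator actually created):

kubectl get pods -n splunk-operator
kubectl describe pod splunk-idx-indexer-0 -n splunk-operator     # check the Events section at the bottom
kubectl logs splunk-idx-indexer-0 -n splunk-operator --previous  # log from the last crashed container, if any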
Yes. The [tcpout] defaultGroup setting tells your Splunk component what to do with events by default. So if you don't modify the _TCP_ROUTING field, your events should be going to the my_indexers group. But when you overwrite the _TCP_ROUTING with just distant_HF_formylogs, you'll be sending to that group only.
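For context, that _TCP_ROUTING overwrite is typically done with a props/transforms pair on the parsing component; a minimal sketch, using the sourcetype and group name from this thread (the transform's stanza name is a placeholder):

# props.conf
[log_sourcetype]
TRANSFORMS-route = route_to_distant_hf

# transforms.conf
[route_to_distant_hf]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF_formylogs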
OK, I understand what you're saying. But sorry, I forgot to mention that I have a default tcpout group in my conf:

[tcpout]
defaultGroup = my_indexers
forceTimebasedAutoLB = true
forwardedindex.filter.disable = true

[tcpout:my_indexers]
server = indexer1:9997, indexer2:9997

So, if I'm correct, this inputs.conf:

[udp://22210]
index = my_logs_indexer
sourcetype = log_sourcetype
disabled = false

sends the logs to the default output group, because no output is specified. Correct me if I'm wrong, and sorry for leaving this config out of the first question.
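For completeness, if the goal were to send just that one UDP input to the distant HF group instead of the default, _TCP_ROUTING can also be set directly on the input stanza; a minimal sketch, reusing the group name from the earlier reply:

# inputs.conf
[udp://22210]
index = my_logs_indexer
sourcetype = log_sourcetype
disabled = false
_TCP_ROUTING = distant_HF_formylogs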
This is the result. I would expect a LOG field to be created for each event, with the different values of its log1, log2, or logn. The regular expression works (tested on regex101), and the other_transforms_stanza does not apply to this field.
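For comparison, a minimal search-time extraction sketch that would create a LOG field; the stanza names, sourcetype, and regex below are placeholders, since the actual ones are not shown in the post:

# transforms.conf
[extract_log]
REGEX = (log\d+)
FORMAT = LOG::$1

# props.conf
[your_sourcetype]
REPORT-extract_log = extract_log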
Correct. I had a different user; I created an admin one and the error went away.