All Posts


Hi @dwraesner , I usually use the Partner Portal, but I know that you can also send an email to support@splunk.com. Ciao. Giuseppe
Hi @LearningGuy , this means that you have a lot of time to spend in front of your PC! Obviously I'm joking! This isn't a good idea: usually the approach is the opposite, i.e. take static data from an external DB and store it in a lookup or in an index, because data extractions from a DB are usually very slow. Ciao. Giuseppe
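For example, a minimal sketch of that approach, reusing the connection and table names from your own example (adjust them to your real ones):

| dbxquery connection="studentDB" query="SELECT No, Name, Address FROM student_info"
| outputlookup student_info.csv

If you schedule a search like that, your existing | lookup student_info.csv ... correlation keeps working unchanged against periodically refreshed data.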
Hello, I am currently correlating an index with a CSV file using lookup. I am planning to move the CSV file to a database and will replace the lookup with dbxlookup. Below is my search query using lookup:

index=student_grade | lookup student_info.csv No AS No OUTPUTNEW Name Address

Below is my "future" search query using DBXLookup. Is it going to be this simple? Please share your experience. Thank you so much.

index=student_grade | dbxlookup connection="studentDB" query="SELECT * FROM student_info" No AS No OUTPUT Name, Address

index=student_grade
No  Class    Grade
10  math     A
10  english  B
20  math     B
20  english  C

student_info.csv
No  Name       Address
10  student10  Address10
20  student20  Address20

Expected output:
No  Class    Grade  Name       Address
10  math     A      student10  Address10
10  english  B      student10  Address10
20  math     B      student20  Address20
20  english  C      student20  Address20
Still not working. I think it's not even recognizing the token value. Is anything wrong with the way I am defining the token, or do I maybe need to initialize it before it can be referenced?
The PID hits an error and does not recover, and the add-on cannot start new inputs because the previous ones are reported as still running. The problem is that when the process running the data input hits an error, it is not handling it or recovering. The add-on can't start the input again, because the input reports that the PID is already running.
The add-on has also been upgraded: Qualys Technology Add-on (TA) for Splunk | Splunkbase, version 1.11.4.
Data inputs having issues: host detection, policy_posture_info.
Please suggest how to fix the issue.

TA-QualysCloudPlatform: 2024-05-15 01:10:52 PID=1889491 [MainThread] ERROR: Another instance of policy_posture_info is already running with PID 2*****. I am exiting.
TA-QualysCloudPlatform: 2024-05-15 01:10:52 PID=1889491 [MainThread] INFO: Earlier Running PID: 2*****
Hi All, hope you are having a great day. I have a quick question. Given the data below, how do I extract just the first value of the attribute newValue (in our example, "None")? The first value of newValue keeps changing, so it cannot be hard-coded.

{
  targetResources: [
    {
      displayName: null
      groupType: null
      id: f61b1166
      modifiedProperties: [
        {
          displayName: PasswordPolicies
          newValue: ["None"]                          // extract only this value
          oldValue: ["DisablePasswordExpiration"]
        }
        {
          displayName: Included Updated Properties
          newValue: "PasswordPolicies"
          oldValue: null
        }
        {
          displayName: TargetId.UserType
          newValue: "Member"
          oldValue: null
        }
      ]
}
Thank you, that is something I will use if I cannot find anything that does what I actually need. The issue is that the lookup file column is a single word, while the sql field can be many characters. The example I gave had a structure you could use rex on; however, in my real-life data there is no structure. For example, scbt_owner could appear as "scbt" or "scbt_owner" or " as scbt" or "where scbt". If it were only those 4 examples I gave, then yes, I could use rex, but the text in the SQL column could be anything. Basically, I would like to use the lookup file lk_wlc_app_short to do an "in" against the sql field: use the lookup file as a base, and if any of the text in the lookup file matches the sql field, flag it as a match, so I can get the final table output I want. I am not sure if Splunk can do this. I know it can match when both the sql field and the lk_wlc_app_short field are the same (as you gave me in your example), but can Splunk determine which rows of the lookup file match the SQL field without parsing with rex, given that the sql field text is essentially random?
I would like to create a table with the following conditions. Can someone please tell me how to do this?
1. The log display is limited to a maximum of 20 rows.
2. Logs beyond the 20th line are not displayed (not even on a second page).
3. If there are fewer than 20 lines of logs, the table is resized to fit them.
*If I set [head 10] in the query, there is an extra row left in the table when there is only one log. I would like to eliminate this.
If this is from Precisely, ask them.  It is possible that their QA missed something.
Hello Giuseppe,
I tried opening a non-technical case via the Support Portal, but was directed to a page where I needed to enter a support entitlement number to open any case. I then called the support number directly, but the automated menu system put me in a loop over and over, so I never reached a support person to talk with. I couldn't figure out how to open a non-technical ticket with Splunk Support. Any specifics on how to do that?
Best regards, Dennis
Yep, it works. Set your id=

<input type="link" token="unused" id="resized_input" searchWhenChanged="true">

Then style it:

<row depends="$HIDE_ALWAYS$">
  <panel>
    <title>Hidden panel for a horizontal linked list</title>
    <html>
      <style>
        #resized_input {
          width: 350px;
        }
      </style>
    </html>
  </panel>
</row>
That appears to be what I need. Thanks much.
Try something like this

| stats count as c by foo
| sort - c
| eventstats count as foos
| eval binsize=ceil(foos/10)
| streamstats count as row
| eval foobin=1+floor((row-1)/binsize)
| chart sum(c) by foobin
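If you want to see the grouping behaviour on its own, here is a minimal self-contained sketch that fabricates the A/B/C/D counts from your example with makeresults and uses 2 bins instead of 10 (only the makeresults/eval lines are made up; the binning logic is the same):

| makeresults count=34
| streamstats count as n
| eval foo=case(n<=10,"A", n<=19,"B", n<=27,"C", true(),"D")
| stats count as c by foo
| sort - c
| eventstats count as foos
| eval binsize=ceil(foos/2)
| streamstats count as row
| eval foobin=1+floor((row-1)/binsize)
| chart sum(c) by foobin

This should return foobin=1 with sum(c)=19 (A plus B) and foobin=2 with sum(c)=15 (C plus D), matching the expected output you described.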
Yes, I do realize that my question isn't very well-formed. Let me provide an example, using 2 bins instead of 10 for brevity:
Suppose my data has 10 lines where foo="A", 9 lines where foo="B", 8 lines where foo="C", and 7 lines where foo="D". Then

| stats count as c by foo | sort -c

should output

foo  c
A    10
B    9
C    8
D    7

What I want from the chart+bins command (for example) is something like this:

bins  sum
bin1  19
bin2  15

...where bin1 is formed from A and B, since they have the top two c values, bin2 is C and D as the smallest, and the sum value is sum(c).
It isn't clear what your expected result would look like; having said that, does this help?

| stats count as c by foo
| sort - c
| transpose 0 header_field=foo column_name=count
I have some non-time-based data that I'd like to summarize using chart with a small number of bins. For example,

<some search>
| stats count as c by foo
| sort -c
| chart sum(c) by foo bins=10

"foo" is not numeric, so it automatically fails, but I don't want the bins to be determined by an intrinsic order on foo anyway: the bins should respect the order that comes from the sort command. Thus, in the chart, the first "bar" should represent the greatest decile from the "sort -c" command, the second, the second decile, and so on. I can't figure out how to wrangle an order into the chart command or otherwise make it respect the sort and use that order as the basis for the separation into bins. Am I barking up the wrong tree here?
The lookup reduces the iterations of the map command. In a real-world scenario, I have a field called "dept" that lists one of ten departments for each result. The map command only needs to iterate through each one (ten times total): outputlookup saves off the data, stats separates each dept, and map iterates through them. A sketch of that pattern is below.
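To make that concrete, here is a minimal sketch of the pattern with placeholder names (the base search, the dept_results.csv lookup file, and the inner search are illustrative only, not the real ones from my environment):

<your base search>
| outputlookup dept_results.csv
| stats count by dept
| map search="| inputlookup dept_results.csv | search dept=\"$dept$\" | stats count by dept" maxsearches=10

outputlookup saves the full result set once, stats collapses it to one row per dept (about ten rows), and map then runs the inner search once per dept rather than once per original result.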
Hi folks, So I'm working to migrate from the old Splunk Connect for Kubernetes log collector to the new Splunk OTEL Collector. I  am getting the logs from pods, so I know that I have it partially configured correctly at least.   I'm not getting logs from /var/log/kubernetes/audit/ nor from /var/log/audit/ as I believe I have configured in the below values file.   I am not getting logs from the containers that begin with `audit*` to any index, let alone what I'd expect from the transform processor statement here:   set(resource.attributes["com.splunk.index"], Concat(["audit", resource.attributes["k8s.namespace.name"]], "-")) where IsMatch(resource.attributes["k8s.container.name"], "audit-.*")      The full values file is below, though I think the formatting looks better in the github gist    splunk-otel-collector: clusterName: ${env:CLUSTER_NAME} priorityClassName: "system-cluster-critical" splunkPlatform: # sets Splunk Platform as a destination. Use the /services/collector/event # endpoint for proper extraction of fields. endpoint: wheeeeee token: "fake-placeholder-token" index: "k8s" # should be able to replace with "" to dynamically set index as was done with SCK but this chart does not allow logsEnabled: true secret: create: false name: fake-credentials validateSecret: false logsCollection: containers: enabled: true excludePaths: - /var/log/containers/*fluent-bit* - /var/log/containers/*speaker* - /var/log/containers/*datadog* - /var/log/containers/*collectd* - /var/log/containers/*rook-ceph* - /var/log/containers/*bird* - /var/log/containers/*logdna* - /var/log/containers/*6c6f616462616c2d* - /var/log/containers/*lb-6c6f616462616c2d* # extraOperators: # - type: copy # # Copy the name of the namespace associated with the log record. # from: resource["k8s.namespace.name"] # # Copy to the index key, so the record will be ingested under the index named after the k8s namespace. # to: resource["com.splunk.index"] extraFileLogs: filelog/kube-audit: # sck logs go to audit-kube index, but got it in otel index for now. include: - /var/log/kubernetes/audit/kube-apiserver-audit*.log start_at: beginning include_file_path: true include_file_name: false resource: host.name: resource["k8s.node.name"] com.splunk.index: audit-kube com.splunk.sourcetype: kube:apiserver-audit com.splunk.source: /var/log/kubernetes/audit/kube-apiserver-audit.log filelog/linux-audit: include: - /var/log/audit/audit*.log start_at: beginning include_file_path: true include_file_name: false resource: host.name: resource["k8s.node.name"] com.splunk.index: audit-linux com.splunk.sourcetype: linux:audit com.splunk.source: /var/log/audit/audit.log # can't find these results for SCK yet extraAttributes: fromLabels: - key: k8s.pod.labels.cluster.name tag_name: cluster_name from: pod - key: k8s.namespace.labels.cluster.class tag_name: cluster_class from: namespace - key: k8s.namespace.labels.cluster.env from: namespace - key: k8s.node.name tag_name: host from: node agent: enabled: true config: processors: # add cluster metadata to each logged event # these are pulled in as environment variables due to a limitation # as helm is unable to use templating when specifying values. 
attributes/cluster_name_filter: actions: - key: cluster_name action: upsert value: ${env:CLUSTER_NAME} attributes/cluster_class_filter: actions: - key: cluster_class action: upsert value: ${env:CLUSTER_CLASS} attributes/cluster_env_filter: actions: - key: cluster_env action: upsert value: ${env:CLUSTER_ENV} transform/namespace_to_index: error_mode: ignore log_statements: - context: log statements: - set(resource.attributes["com.splunk.index"], Concat(["audit", resource.attributes["k8s.namespace.name"]], "-")) where IsMatch(resource.attributes["k8s.container.name"], "audit-.*") - set(resource.attributes["com.splunk.index"], resource.attributes["k8s.namespace.name"]) # attributes/namespace_filter: # actions: # - key: com.splunk.index # action: upsert # value: k8s.namespace.name # - key: logindex # action: delete exporters: debug: verbosity: detailed service: pipelines: logs: processors: - memory_limiter - k8sattributes - filter/logs - batch - resourcedetection - resource - resource/logs - attributes/cluster_name_filter - attributes/cluster_class_filter - attributes/cluster_env_filter - transform/namespace_to_index # - attributes/namespace_filter receivers: kubeletstats: metric_groups: - node - pod - container filelog: include: - /var/log/pods/*/*/*.log - /var/log/kubernetes/audit/*.log - /var/log/audit/audit*.log start_at: beginning include_file_name: false include_file_path: true operators: # parse cri-o format - type: regex_parser id: parser-crio regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$' output: extract_metadata_from_filepath timestamp: parse_from: attributes.time layout_type: gotime layout: '2006-01-02T15:04:05.999999999Z07:00' # Parse CRI-Containerd format - type: regex_parser id: parser-containerd regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$' output: extract_metadata_from_filepath timestamp: parse_from: attributes.time layout: '%Y-%m-%dT%H:%M:%S.%LZ' - type: copy from: resource["k8s.namespace.name"] to: resource["com.splunk.index"] # Set Environment Variables to be set on every Pod in the DaemonSet # Many of these are used as a work-around to include additional log metadata # from what is available in `.Values` but inaccessible due to limitations of # Helm. extraEnvs: - name: CLUSTER_NAME valueFrom: configMapKeyRef: name: cluster-info key: CLUSTER_NAME - name: CLUSTER_CLASS valueFrom: configMapKeyRef: name: cluster-info key: CLUSTER_CLASS - name: CLUSTER_ENV valueFrom: configMapKeyRef: name: cluster-info key: CLUSTER_ENV # The container logs may actually be a series of symlinks. In order to read # them, all directories need to be accessible by the logging pods. We use # volumes and volume mounts to achieve that. extraVolumes: - name: containerdlogs hostPath: path: /var/lib/containerd/pod-logs - name: podlogs hostPath: path: /var/log/pods - name: varlogcontainers hostPath: path: /var/log/containers - name: kubeauditlogs hostPath: path: /var/log/kubernetes/audit - name: linuxauditlogs hostPath: path: /var/log/audit extraVolumeMounts: - name: containerdlogs mountPath: /var/lib/containerd/pod-logs readOnly: true - name: podlogs mountPath: /var/log/pods readOnly: true - name: varlogcontainers mountPath: /var/log/containers readOnly: true - name: kubeauditlogs mountPath: /var/log/kubernetes/audit readOnly: true - name: linuxauditlogs mountPath: /var/log/audit readOnly: true resources: limits: cpu: 1 memory: 4Gi requests: cpu: 1 memory: 1Gi    
The AWS search heads can service the on-prem system, not as search peers, but as Federated Search (FS) providers.  FS allows one Splunk environment (on-prem, in this example) to query another (AWS) a... See more...
The AWS search heads can service the on-prem system, not as search peers, but as Federated Search (FS) providers.  FS allows one Splunk environment (on-prem, in this example) to query another (AWS) and include those results as part of a local search.  You can read more about FS at https://docs.splunk.com/Documentation/Splunk/latest/FederatedSearch/fsoptions Never put a load balancer in a network path that uses the Splunk-to-Splunk protocol.  LBs don't know that protocol and can't be relied on to manage the connections correctly.  Put all of the search peers in the servers= line of distsearch.conf or use Indexer Discovery.
Hey, I know this is old, but did you ever figure this out? I am getting the same errors. Thanks!