All Topics

Hello, in my case I have a list of products with a product type and a weight. For products of the same type, the weight may differ, although it always stays within some range. As an example:

productid | type | weight  | anomaly?
1         | a    | 100kg   |
2         | a    | 102kg   |
3         | b    | 500kg   |
4         | b    | 550kg   |
6         | a    | 15kg    | yes
7         | b    | 2500kg  | yes

One option would be to solve this by calculating the average and standard deviation:

index=products
| stats list("productweight") as weights by "producttype"
| mvexpand weights
| eval weight=tonumber(weights)
| eventstats avg(weight) as avg stdev(weight) as stdev by "producttype"
| eval lowerBound=(avg-stdev*10), upperBound=(avg+stdev*10)
| where weight < lowerBound OR weight > upperBound

But I was wondering whether there is a way to solve this with the anomalydetection function. The function should search for anomalies within the products of the same producttype, not across all weights available. Something like | anomalydetection by "producttype", but this option doesn't seem to be available. Does somebody know how to do this? Many thanks in advance for your help.
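A minimal sketch of the same statistical idea without the stats/mvexpand round trip, assuming the raw events already carry numeric weight and producttype fields (the multiplier of 3 is purely illustrative, the original used 10):

index=products
| eventstats avg(weight) as avg_weight, stdev(weight) as stdev_weight by producttype
| eval lowerBound=avg_weight-3*stdev_weight, upperBound=avg_weight+3*stdev_weight
| where weight < lowerBound OR weight > upperBound

Because eventstats keeps every original event and only attaches the per-producttype statistics, each product is compared against the distribution of its own type, which is the behaviour the question asks for.
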
Hello, I'm trying to build a new chart calculated from packet counts. I search the interfaces of several devices and can show the following:

_time | interface-A | Interface-B | interface-C
9:00  | 100         | 200         | 100
9:10  | 150         | 250         | 100
9:20  | 200         | 300         | 100

I would like to add an "Interface A+B-C" column, as follows:

_time | interface-A | Interface-B | interface-C | Interface A+B-C
9:00  | 100         | 200         | 100         | 200
9:10  | 150         | 250         | 100         | 300
9:20  | 200         | 300         | 100         | 400

How can I make this?
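A minimal sketch of one way to do this, assuming the existing search already produces the three interface columns shown above: append an eval that adds the new column. Field names containing "-" or "+" need single quotes when referenced inside eval, and double quotes when used as the new field name:

<your existing search producing interface-A, Interface-B and interface-C per _time>
| eval "Interface A+B-C" = 'interface-A' + 'Interface-B' - 'interface-C'
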
We've run into a few occasions where one of our network devices stops sending logs to Splunk. I have a tstats search based on the blog post here: https://www.splunk.com/en_us/blog/tips-and-tricks/how-to-determine-when-a-host-stops-sending-logs-to-splunk-expeditiously.html

Here is the search expression I'm using:

| tstats latest(_time) as latest where index=index_name earliest=-1d by host
| eval recent = if(latest > relative_time(now(),"-15m"),1,0), realLatest = strftime(latest,"%c")
| where recent=0

My tstats search does return the hosts that have not sent any logs, but it never triggers when I use this search in an alert. I noticed that the search only shows the hosts in the Statistics view and there are no Events. Is this why my alert is not triggering? I've found several other examples on this forum of people using tstats to detect when a host stops sending logs. Is there something special they are configuring in their alert to trigger off of the statistics results?
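Not a diagnosis of the alert itself, just a sketch of a common pattern: the recent flag can be folded into a single where clause, leaving one statistics row per missing host (index_name remains a placeholder):

| tstats latest(_time) as latest where index=index_name earliest=-1d by host
| where latest < relative_time(now(), "-15m")

Statistics rows count as search results for alerting purposes, so a trigger condition of "Number of Results is greater than 0" should fire whenever the table is non-empty, even though the Events tab stays empty for a tstats search.
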
Hello, I'm using TrackMe Free Edition 2.0.92 on my test environment (a single instance with 2 UFs on Debian 11). I'm able to create vtenants, but I do not see any of them on the Vtenant page. Yet they are listed in the configuration tab, and I cannot access the pop-in to manage any of the tenant specs. This behaviour was already present with previous versions of TrackMe v2. I checked the logs and the TrackMe logs, restarted the instance, updated the app, checked the browser logs (HAR files), removed and reinstalled the app, tried to remove the banner, deactivated then reactivated the library restrictions, and checked limits: all without success. I don't have any more ideas. My prod environment is distributed and does not have the issue. I'm sure I did something wrong somewhere, but I cannot pinpoint where. Could you please suggest some leads? Thanks, Ema
https://splunkbase.splunk.com/app/4564 Hi All, I want to know the status of this particular app, as we are seeing it being deprecated. Is there an alternate app/add-on that provides the same functionality? The current app has stopped working. Regards, Teja
Hi everyone, I'm trying to forward Sysmon event logs from a Windows Server to Splunk with a Universal Forwarder installed on the Windows machines. I've successfully forwarded Security event logs with the same forwarder, so I'm confident there are no network connectivity issues. Sysmon events are created as expected and exist in the Event Viewer. In my setup, I'm sending Sysmon events from my Windows clients to a WEF server, which collects all the logs. This part works fine. My Splunk deployment is a single server running on Rocky Linux. I installed the Splunk UF with a network user account, so it should have access to any event log.

When I try to add a new "Windows Event Logs" input, I only have the following event channels to choose from: Application, ForwardedEvents, Security, Setup, System.

I've tried adding the input manually to the app in the file located at /opt/splunk/etc/deployment-apps/_server_app_WindowsServers/local/inputs.conf. Security logs are sent, but Sysmon logs are not. Here's the content of the file:

[WinEventLog://Security]
index = win_servers

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
index = win_servers
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
renderXml = true

I've tried various options following some tutorials, but nothing worked. I also tried copying the content of this file to $SPLUNK_HOME\etc\apps\_server_app_WindowsServers on the Windows server with the UF, but the results are the same. Any insights into this issue would be greatly appreciated. I'm sure I'm missing something here. Thank you in advance, Yossi
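A hedged observation plus a quick check, not a definitive answer: when clients deliver events through Windows Event Forwarding, the collector typically stores them in the ForwardedEvents channel rather than in Microsoft-Windows-Sysmon/Operational on that machine, so an input pointed at the Sysmon channel on the WEF server may simply find nothing to read. A sketch of a search to see what is actually arriving from those hosts, using the index from the post:

index=win_servers
| stats count by host, source, sourcetype

If the Sysmon events turn out to arrive under a source such as WinEventLog:ForwardedEvents, the input stanza on the WEF collector would need to read that channel instead of the Sysmon/Operational one.
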
Hi, I have set up a deployment server. When I check splunkd_access.log, it shows successful phonehome connections from the Heavy Forwarder, and I can also see apps getting deployed to the deployment clients. But when I run ./splunk list deploy-clients, it shows "No deployment clients have contacted this server". What is going wrong here? Can anyone help me? Regards, PNV
SPL query:

| getservice
| search algorithms=*itsi_predict_*

I want to extract the algorithms and then use outputlookup to store the model_id of the model where recommended is True. Please suggest how I can do this.
Hello everyone, with the introduction of the new ConfigurationTracker in Splunk 9.0 we noticed that some of our apps are not being logged. The system is a Linux single Splunk Enterprise server, and the apps that are not being logged are not located directly under /opt/splunk/etc/apps. Instead we only have symbolic links to another folder on the system. This works for everything else, but the configuration tracker seems to ignore symbolic links. It is also not a permission issue on the linked folder: the linked folder has the same splunk group and permissions assigned.

/opt/splunk/etc/apps/symboliclinkapp -> /anotherfolder/symboliclinkapp

Is there an option to change the configuration tracker to also consider symbolic links?
Currently, this is my SPL query and it just displays results different from what I expect.

This is my hostname_list.csv:

host
hostname_a*
hostname_b*
hostname_c*

| inputlookup hostname_list.csv
| fields host
| join type=inner host
    [search index=unix
    | stats latest(_time) as latest_time, latest(source) as source, latest(_raw) as event by host
    | convert ctime(latest_time) as latest_time]
| table host, latest_time, source, event

And it displays like this:

host        | latest_time | source | event
hostname_a* |             |        |
hostname_b* |             |        |
hostname_c* |             |        |

I assume that the wildcard "*" is acting like a literal string. I'm expecting results like this:

host         | latest_time | source | event
hostname_a12 | test        | test   | test
hostname_a23 | test        | test   | test
hostname_c123| test        | test   | test

Please help, thanks!
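One sketch of a wildcard-friendly alternative, assuming the host column in the lookup is meant to act as search-time wildcards: feed the lookup into the base search as a subsearch, so each row becomes a host="hostname_a*" term and the wildcards are honoured by the search itself, with no join and no literal-string comparison:

index=unix [| inputlookup hostname_list.csv | fields host]
| stats latest(_time) as latest_time, latest(source) as source, latest(_raw) as event by host
| convert ctime(latest_time) as latest_time
| table host, latest_time, source, event

Another option is a lookup definition with a wildcard match_type on host, but the subsearch version keeps everything in plain SPL.
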
The Incident Review dashboard is displaying no values, despite having correlation searches enabled. Upon investigation, I noticed that the notable index has 0 bytes. Could someone kindly guide me on how to troubleshoot this issue? Thanks!
Hello, I am currently correlating an index with a CSV file using lookup. I am planning to move the CSV file to a database and will replace lookup with dbxlookup.

Below is my search query using lookup:

index=student_grade
| lookup student_info.csv No AS No OUTPUTNEW Name Address

Below is my "future" search query using dbxlookup. Is it going to be this simple? Please share your experience. Thank you so much.

index=student_grade
| dbxlookup connection="studentDB" query="SELECT * FROM student_info" No AS No OUTPUT Name, Address

index=student_grade:

No | Class   | Grade
10 | math    | A
10 | english | B
20 | math    | B
20 | english | C

student_info.csv:

No | Name      | Address
10 | student10 | Address10
20 | student20 | Address20

Expected result:

No | Class   | Grade | Name      | Address
10 | math    | A     | student10 | Address10
10 | english | B     | student10 | Address10
20 | math    | B     | student20 | Address20
20 | english | C     | student20 | Address20
A PID hits an error and does not recover, and the add-on cannot start inputs because previous ones are reported as still running. The problem is that when the process running the data input hits an error, it is not handling it or recovering. The add-on can't start the input again, because the input reports that the PID is already running. The add-on has also been upgraded to the Qualys Technology Add-on (TA) for Splunk | Splunkbase, version 1.11.4. The data inputs having issues are host detection and policy_posture_info. Please suggest how to fix the issue.

TA-QualysCloudPlatform: 2024-05-15 01:10:52 PID=1889491 [MainThread] ERROR: Another instance of policy_posture_info is already running with PID 2*****. I am exiting.
TA-QualysCloudPlatform: 2024-05-15 01:10:52 PID=1889491 [MainThread] INFO: Earlier Running PID: 2*****
Hi All, hope you are having a great day. I have a quick question: given the data below, how do I extract just the first value of the attribute newValue (in our example, "None")? The first value of newValue keeps changing, so it cannot be hard-coded.

{
  targetResources: [
    {
      displayName: null
      groupType: null
      id: f61b1166
      modifiedProperties: [
        {
          displayName: PasswordPolicies
          newValue: ["None"]                          // extract only this value
          oldValue: ["DisablePasswordExpiration"]
        }
        {
          displayName: Included Updated Properties
          newValue: "PasswordPolicies"
          oldValue: null
        }
        {
          displayName: TargetId.UserType
          newValue: "Member"
          oldValue: null
        }
      ]
    }
  ]
}
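A minimal sketch using spath, assuming this structure is the raw JSON of the event and that the property of interest is the one whose displayName is PasswordPolicies (the field names on the left of the evals are only illustrative):

| spath path=targetResources{}.modifiedProperties{} output=mp
| mvexpand mp
| eval prop_name=spath(mp, "displayName"), first_new_value=mvindex(spath(mp, "newValue{}"), 0)
| where prop_name="PasswordPolicies"
| table prop_name, first_new_value

If newValue arrives as a JSON-encoded string rather than a real array, an extra spath (or a simple rex) on that string may be needed, but the idea stays the same: expand modifiedProperties, filter on displayName, and take the first element of newValue.
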
I would like to create a table with the following conditions. Can someone please tell me how to do this?

1. The table displays at most 20 rows of logs.
2. Logs beyond the 20th row are not displayed at all (not even on a second page).
3. If there are fewer than 20 rows of logs, the table shrinks to fit them.

Note: if I set head 10 in the query, an extra empty row is left in the table when there is only one log. I would like to eliminate this.
I have some non-time-based data that I'd like to summarize using chart with a small number of bins. For example:

<some search>
| stats count as c by foo
| sort -c
| chart sum(c) by foo bins=10

"foo" is not numeric, so it automatically fails, but I don't want the bins to be determined by an intrinsic order on foo anyway: the bins should respect the order that comes from the sort command. Thus, in the chart, the first "bar" should represent the greatest decile of the "sort -c" output, the second the second decile, and so on. I can't figure out how to wrangle an order into the chart command or otherwise make it respect the sort and use that order as the basis for the separation into bins. Am I barking up the wrong tree here?
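A sketch of one way to get decile bins that follow the sorted order, sidestepping chart's numeric requirement: number the sorted rows with streamstats, convert the row number into a decile, and aggregate by that (the names rank, total, and decile are only illustrative):

<some search>
| stats count as c by foo
| sort 0 -c
| streamstats count as rank
| eventstats max(rank) as total
| eval decile=ceiling(rank*10/total)
| stats sum(c) as c by decile

Decile 1 then holds the largest values of c, matching the order established by the sort, and the final stats produces the ten bars.
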
Hi folks, So I'm working to migrate from the old Splunk Connect for Kubernetes log collector to the new Splunk OTEL Collector. I  am getting the logs from pods, so I know that I have it partially configured correctly at least.   I'm not getting logs from /var/log/kubernetes/audit/ nor from /var/log/audit/ as I believe I have configured in the below values file.   I am not getting logs from the containers that begin with `audit*` to any index, let alone what I'd expect from the transform processor statement here:   set(resource.attributes["com.splunk.index"], Concat(["audit", resource.attributes["k8s.namespace.name"]], "-")) where IsMatch(resource.attributes["k8s.container.name"], "audit-.*")      The full values file is below, though I think the formatting looks better in the github gist    splunk-otel-collector: clusterName: ${env:CLUSTER_NAME} priorityClassName: "system-cluster-critical" splunkPlatform: # sets Splunk Platform as a destination. Use the /services/collector/event # endpoint for proper extraction of fields. endpoint: wheeeeee token: "fake-placeholder-token" index: "k8s" # should be able to replace with "" to dynamically set index as was done with SCK but this chart does not allow logsEnabled: true secret: create: false name: fake-credentials validateSecret: false logsCollection: containers: enabled: true excludePaths: - /var/log/containers/*fluent-bit* - /var/log/containers/*speaker* - /var/log/containers/*datadog* - /var/log/containers/*collectd* - /var/log/containers/*rook-ceph* - /var/log/containers/*bird* - /var/log/containers/*logdna* - /var/log/containers/*6c6f616462616c2d* - /var/log/containers/*lb-6c6f616462616c2d* # extraOperators: # - type: copy # # Copy the name of the namespace associated with the log record. # from: resource["k8s.namespace.name"] # # Copy to the index key, so the record will be ingested under the index named after the k8s namespace. # to: resource["com.splunk.index"] extraFileLogs: filelog/kube-audit: # sck logs go to audit-kube index, but got it in otel index for now. include: - /var/log/kubernetes/audit/kube-apiserver-audit*.log start_at: beginning include_file_path: true include_file_name: false resource: host.name: resource["k8s.node.name"] com.splunk.index: audit-kube com.splunk.sourcetype: kube:apiserver-audit com.splunk.source: /var/log/kubernetes/audit/kube-apiserver-audit.log filelog/linux-audit: include: - /var/log/audit/audit*.log start_at: beginning include_file_path: true include_file_name: false resource: host.name: resource["k8s.node.name"] com.splunk.index: audit-linux com.splunk.sourcetype: linux:audit com.splunk.source: /var/log/audit/audit.log # can't find these results for SCK yet extraAttributes: fromLabels: - key: k8s.pod.labels.cluster.name tag_name: cluster_name from: pod - key: k8s.namespace.labels.cluster.class tag_name: cluster_class from: namespace - key: k8s.namespace.labels.cluster.env from: namespace - key: k8s.node.name tag_name: host from: node agent: enabled: true config: processors: # add cluster metadata to each logged event # these are pulled in as environment variables due to a limitation # as helm is unable to use templating when specifying values. 
attributes/cluster_name_filter: actions: - key: cluster_name action: upsert value: ${env:CLUSTER_NAME} attributes/cluster_class_filter: actions: - key: cluster_class action: upsert value: ${env:CLUSTER_CLASS} attributes/cluster_env_filter: actions: - key: cluster_env action: upsert value: ${env:CLUSTER_ENV} transform/namespace_to_index: error_mode: ignore log_statements: - context: log statements: - set(resource.attributes["com.splunk.index"], Concat(["audit", resource.attributes["k8s.namespace.name"]], "-")) where IsMatch(resource.attributes["k8s.container.name"], "audit-.*") - set(resource.attributes["com.splunk.index"], resource.attributes["k8s.namespace.name"]) # attributes/namespace_filter: # actions: # - key: com.splunk.index # action: upsert # value: k8s.namespace.name # - key: logindex # action: delete exporters: debug: verbosity: detailed service: pipelines: logs: processors: - memory_limiter - k8sattributes - filter/logs - batch - resourcedetection - resource - resource/logs - attributes/cluster_name_filter - attributes/cluster_class_filter - attributes/cluster_env_filter - transform/namespace_to_index # - attributes/namespace_filter receivers: kubeletstats: metric_groups: - node - pod - container filelog: include: - /var/log/pods/*/*/*.log - /var/log/kubernetes/audit/*.log - /var/log/audit/audit*.log start_at: beginning include_file_name: false include_file_path: true operators: # parse cri-o format - type: regex_parser id: parser-crio regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$' output: extract_metadata_from_filepath timestamp: parse_from: attributes.time layout_type: gotime layout: '2006-01-02T15:04:05.999999999Z07:00' # Parse CRI-Containerd format - type: regex_parser id: parser-containerd regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$' output: extract_metadata_from_filepath timestamp: parse_from: attributes.time layout: '%Y-%m-%dT%H:%M:%S.%LZ' - type: copy from: resource["k8s.namespace.name"] to: resource["com.splunk.index"] # Set Environment Variables to be set on every Pod in the DaemonSet # Many of these are used as a work-around to include additional log metadata # from what is available in `.Values` but inaccessible due to limitations of # Helm. extraEnvs: - name: CLUSTER_NAME valueFrom: configMapKeyRef: name: cluster-info key: CLUSTER_NAME - name: CLUSTER_CLASS valueFrom: configMapKeyRef: name: cluster-info key: CLUSTER_CLASS - name: CLUSTER_ENV valueFrom: configMapKeyRef: name: cluster-info key: CLUSTER_ENV # The container logs may actually be a series of symlinks. In order to read # them, all directories need to be accessible by the logging pods. We use # volumes and volume mounts to achieve that. extraVolumes: - name: containerdlogs hostPath: path: /var/lib/containerd/pod-logs - name: podlogs hostPath: path: /var/log/pods - name: varlogcontainers hostPath: path: /var/log/containers - name: kubeauditlogs hostPath: path: /var/log/kubernetes/audit - name: linuxauditlogs hostPath: path: /var/log/audit extraVolumeMounts: - name: containerdlogs mountPath: /var/lib/containerd/pod-logs readOnly: true - name: podlogs mountPath: /var/log/pods readOnly: true - name: varlogcontainers mountPath: /var/log/containers readOnly: true - name: kubeauditlogs mountPath: /var/log/kubernetes/audit readOnly: true - name: linuxauditlogs mountPath: /var/log/audit readOnly: true resources: limits: cpu: 1 memory: 4Gi requests: cpu: 1 memory: 1Gi    
Hello All, I have just upgraded Splunk from version 9.0.1 to 9.2.1. I have one question: the "Apps" panel on the left side of the window now has a white background. Version 9.0.1 and older had a dark or black background (my preferred view). Is there a way to set the background of the Apps panel back to dark or black? Thanks, Eric W.
Currently working on deploying Splunk on AWS to work in conjunction with our current on-prem solution, and I have 2 questions. Can I configure our AWS search heads to function both as normal search heads and as search peers for our on-prem solution, or would I need dedicated search peers? And would I be able to place the search peers behind an NLB and point the on-prem distconf file to that NLB, or would I have to hardcode the instances in the distconf file?
Hi, we are using Splunk Cloud, so we can't access the conf files. In one of our custom source types, we need to create multiple new fields. Those fields are calculated recursively, meaning Eval2 uses the result of Eval1, then Eval3 uses the result of Eval2, and so on. Here are some examples of our EVAL fields:

EVAL-url_primaire_apache = if(match(url, "/"), mvindex(split(url, "/"), 0), url)
(if there is a "/" character, we only keep the part before the first "/"; otherwise, we use the full url field)

EVAL-url_primaire_apache_sans_ports = if(match(url_primaire_apache, ":"), mvindex(split(url_primaire_apache, ":"), 0), url_primaire_apache)
(we use the result of the previous eval and keep only the part before ":", or the full previous result if there is no ":")

Now the issue is that only the first field is generated. I think that might be expected, since the EVALs are done in parallel. I tried to create an alias on the result of the first eval and then call it in the second eval, like this:

FIELDALIAS-url_primaire_apache_alias1 = url_primaire_apache AS url_p_a
EVAL-url_primaire_apache_sans_ports = if(match(url_p_a, ":"), mvindex(split(url_p_a, ":"), 0), url_p_a)

However, this still doesn't work: only the first EVAL field is created, and neither the alias nor the second EVAL is. What am I missing? How can we create EVAL fields recursively?
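A minimal sketch of one workaround, under the assumption that EVAL- calculated fields in props.conf cannot reference each other's results: fold the whole chain into a single expression, so the dependent field repeats the first field's logic inline instead of referring to it (props.conf-style lines mirroring the post's own format):

EVAL-url_primaire_apache = mvindex(split(url, "/"), 0)
EVAL-url_primaire_apache_sans_ports = mvindex(split(mvindex(split(url, "/"), 0), ":"), 0)

The if(match(...)) wrapper from the original is not strictly needed here, because split() on a string that does not contain the delimiter simply returns the original string as its only element, so mvindex(..., 0) still yields the full value.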