All Topics

I have some non-time-based data that I'd like to summarize using chart with a small number of bins. For example:

<some search> | stats count as c by foo | sort -c | chart sum(c) by foo bins=10

"foo" is not numeric, so this fails outright, but I don't want the bins to be determined by an intrinsic order on foo anyway: the bins should respect the order that comes from the sort command. Thus, in the chart, the first "bar" should represent the greatest decile from "sort -c", the second bar the second decile, and so on. I can't figure out how to wrangle an order into the chart command, or otherwise make it respect the sort and use that order as the basis for the separation into bins. Am I barking up the wrong tree here?
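The intended binning is easy to state outside SPL. This Python sketch (with made-up sample counts standing in for the `stats count by foo` result) shows what summing the sorted categories into ten ordered bins would produce:

```python
import math

def decile_bins(counts, bins=10):
    """Sort categories by count (descending) -- the order the chart should
    respect -- then group them into `bins` ordered chunks and sum each chunk."""
    ordered = sorted(counts.items(), key=lambda kv: -kv[1])
    per_bin = math.ceil(len(ordered) / bins)
    sums = []
    for i in range(0, len(ordered), per_bin):
        chunk = ordered[i:i + per_bin]
        sums.append(sum(c for _, c in chunk))
    return sums

# Hypothetical sample data: 20 categories with counts 1..20.
counts = {f"foo{i}": i for i in range(1, 21)}
print(decile_bins(counts))  # [39, 35, 31, 27, 23, 19, 15, 11, 7, 3]
```

In SPL itself, one possible direction (a sketch, not a verified search) is to number the sorted rows and bin on the rank, e.g. `| sort -c | eventstats count as total | streamstats count as rank | eval bin=ceiling(rank*10/total) | stats sum(c) by bin`.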
Hi folks,

So I'm working to migrate from the old Splunk Connect for Kubernetes log collector to the new Splunk OTel Collector. I am getting the logs from pods, so I know that I have it at least partially configured correctly.

I'm not getting logs from /var/log/kubernetes/audit/ nor from /var/log/audit/, which I believe I have configured in the values file below. I am also not getting logs from the containers that begin with `audit*` into any index, let alone the index I'd expect from this transform processor statement:

set(resource.attributes["com.splunk.index"], Concat(["audit", resource.attributes["k8s.namespace.name"]], "-")) where IsMatch(resource.attributes["k8s.container.name"], "audit-.*")

The full values file is below, though I think the formatting looks better in the GitHub gist:

splunk-otel-collector:
  clusterName: ${env:CLUSTER_NAME}
  priorityClassName: "system-cluster-critical"
  splunkPlatform:
    # Sets Splunk Platform as a destination. Use the /services/collector/event
    # endpoint for proper extraction of fields.
    endpoint: wheeeeee
    token: "fake-placeholder-token"
    # Should be able to replace with "" to dynamically set the index as was
    # done with SCK, but this chart does not allow it.
    index: "k8s"
    logsEnabled: true
  secret:
    create: false
    name: fake-credentials
    validateSecret: false
  logsCollection:
    containers:
      enabled: true
      excludePaths:
        - /var/log/containers/*fluent-bit*
        - /var/log/containers/*speaker*
        - /var/log/containers/*datadog*
        - /var/log/containers/*collectd*
        - /var/log/containers/*rook-ceph*
        - /var/log/containers/*bird*
        - /var/log/containers/*logdna*
        - /var/log/containers/*6c6f616462616c2d*
        - /var/log/containers/*lb-6c6f616462616c2d*
      # extraOperators:
      #   - type: copy
      #     # Copy the name of the namespace associated with the log record
      #     from: resource["k8s.namespace.name"]
      #     # to the index key, so the record will be ingested under the
      #     # index named after the k8s namespace.
      #     to: resource["com.splunk.index"]
    extraFileLogs:
      filelog/kube-audit:
        # SCK logs go to the audit-kube index, but got it in the otel index for now.
        include:
          - /var/log/kubernetes/audit/kube-apiserver-audit*.log
        start_at: beginning
        include_file_path: true
        include_file_name: false
        resource:
          host.name: resource["k8s.node.name"]
          com.splunk.index: audit-kube
          com.splunk.sourcetype: kube:apiserver-audit
          com.splunk.source: /var/log/kubernetes/audit/kube-apiserver-audit.log
      filelog/linux-audit:
        include:
          - /var/log/audit/audit*.log
        start_at: beginning
        include_file_path: true
        include_file_name: false
        resource:
          host.name: resource["k8s.node.name"]
          com.splunk.index: audit-linux
          com.splunk.sourcetype: linux:audit
          com.splunk.source: /var/log/audit/audit.log
  # can't find these results for SCK yet
  extraAttributes:
    fromLabels:
      - key: k8s.pod.labels.cluster.name
        tag_name: cluster_name
        from: pod
      - key: k8s.namespace.labels.cluster.class
        tag_name: cluster_class
        from: namespace
      - key: k8s.namespace.labels.cluster.env
        from: namespace
      - key: k8s.node.name
        tag_name: host
        from: node
  agent:
    enabled: true
    config:
      processors:
        # Add cluster metadata to each logged event. These are pulled in as
        # environment variables due to a limitation: helm is unable to use
        # templating when specifying values.
        attributes/cluster_name_filter:
          actions:
            - key: cluster_name
              action: upsert
              value: ${env:CLUSTER_NAME}
        attributes/cluster_class_filter:
          actions:
            - key: cluster_class
              action: upsert
              value: ${env:CLUSTER_CLASS}
        attributes/cluster_env_filter:
          actions:
            - key: cluster_env
              action: upsert
              value: ${env:CLUSTER_ENV}
        transform/namespace_to_index:
          error_mode: ignore
          log_statements:
            - context: log
              statements:
                - set(resource.attributes["com.splunk.index"], Concat(["audit", resource.attributes["k8s.namespace.name"]], "-")) where IsMatch(resource.attributes["k8s.container.name"], "audit-.*")
                - set(resource.attributes["com.splunk.index"], resource.attributes["k8s.namespace.name"])
        # attributes/namespace_filter:
        #   actions:
        #     - key: com.splunk.index
        #       action: upsert
        #       value: k8s.namespace.name
        #     - key: logindex
        #       action: delete
      exporters:
        debug:
          verbosity: detailed
      service:
        pipelines:
          logs:
            processors:
              - memory_limiter
              - k8sattributes
              - filter/logs
              - batch
              - resourcedetection
              - resource
              - resource/logs
              - attributes/cluster_name_filter
              - attributes/cluster_class_filter
              - attributes/cluster_env_filter
              - transform/namespace_to_index
              # - attributes/namespace_filter
      receivers:
        kubeletstats:
          metric_groups:
            - node
            - pod
            - container
        filelog:
          include:
            - /var/log/pods/*/*/*.log
            - /var/log/kubernetes/audit/*.log
            - /var/log/audit/audit*.log
          start_at: beginning
          include_file_name: false
          include_file_path: true
          operators:
            # Parse CRI-O format
            - type: regex_parser
              id: parser-crio
              regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
              output: extract_metadata_from_filepath
              timestamp:
                parse_from: attributes.time
                layout_type: gotime
                layout: '2006-01-02T15:04:05.999999999Z07:00'
            # Parse CRI-Containerd format
            - type: regex_parser
              id: parser-containerd
              regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
              output: extract_metadata_from_filepath
              timestamp:
                parse_from: attributes.time
                layout: '%Y-%m-%dT%H:%M:%S.%LZ'
            - type: copy
              from: resource["k8s.namespace.name"]
              to: resource["com.splunk.index"]
    # Set environment variables to be set on every Pod in the DaemonSet.
    # Many of these are used as a work-around to include additional log
    # metadata from what is available in `.Values` but inaccessible due to
    # limitations of Helm.
    extraEnvs:
      - name: CLUSTER_NAME
        valueFrom:
          configMapKeyRef:
            name: cluster-info
            key: CLUSTER_NAME
      - name: CLUSTER_CLASS
        valueFrom:
          configMapKeyRef:
            name: cluster-info
            key: CLUSTER_CLASS
      - name: CLUSTER_ENV
        valueFrom:
          configMapKeyRef:
            name: cluster-info
            key: CLUSTER_ENV
    # The container logs may actually be a series of symlinks. In order to
    # read them, all directories need to be accessible by the logging pods.
    # We use volumes and volume mounts to achieve that.
    extraVolumes:
      - name: containerdlogs
        hostPath:
          path: /var/lib/containerd/pod-logs
      - name: podlogs
        hostPath:
          path: /var/log/pods
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: kubeauditlogs
        hostPath:
          path: /var/log/kubernetes/audit
      - name: linuxauditlogs
        hostPath:
          path: /var/log/audit
    extraVolumeMounts:
      - name: containerdlogs
        mountPath: /var/lib/containerd/pod-logs
        readOnly: true
      - name: podlogs
        mountPath: /var/log/pods
        readOnly: true
      - name: varlogcontainers
        mountPath: /var/log/containers
        readOnly: true
      - name: kubeauditlogs
        mountPath: /var/log/kubernetes/audit
        readOnly: true
      - name: linuxauditlogs
        mountPath: /var/log/audit
        readOnly: true
    resources:
      limits:
        cpu: 1
        memory: 4Gi
      requests:
        cpu: 1
        memory: 1Gi
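To make the routing intent easier to reason about, here is a plain-Python paraphrase of the two OTTL statements (attribute names copied from the config; this is an illustration, not OTel code):

```python
import re

def route_index(attributes):
    """Mimic the two OTTL statements: audit-* containers go to an
    'audit-<namespace>' index; everything else goes to the namespace index."""
    namespace = attributes.get("k8s.namespace.name", "")
    container = attributes.get("k8s.container.name", "")
    if re.match(r"audit-.*", container):       # IsMatch(..., "audit-.*")
        return "-".join(["audit", namespace])  # Concat(["audit", ns], "-")
    return namespace

print(route_index({"k8s.namespace.name": "payments",
                   "k8s.container.name": "audit-sidecar"}))  # audit-payments
print(route_index({"k8s.namespace.name": "payments",
                   "k8s.container.name": "web"}))            # payments
```

One thing worth double-checking in the original config: the second `set` statement is unconditional and runs after the first, so as written it may overwrite the audit index for every record. The sketch above applies the namespace fallback only when the audit match fails.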
Hello All, I have just upgraded Splunk from version 9.0.1 to 9.2.1. I have one question: the "Apps" panel on the left side of the window now has a white background. Version 9.0.1 and older had a dark or black background (my preferred view). Is there a way to set the background of the Apps panel back to dark or black?

Thanks, Eric W.
Currently working on deploying Splunk on AWS to work in conjunction with our current on-prem solution, and I have two questions. Can I configure our AWS search heads to function as normal search heads AND as search peers for our on-prem solution, or would I need dedicated search peers? And would I be able to place the search peers behind an NLB and point the on-prem distsearch.conf file to that NLB, or would I have to hardcode the instances in the distsearch.conf file?
Hi, We are using Splunk Cloud, so we can't access the .conf files. In one of our custom source types, we need to create multiple new fields. Those fields are calculated recursively, meaning Eval2 uses the result of Eval1, then Eval3 uses the result of Eval2, and so on. Here are some examples of our eval fields:

EVAL-url_primaire_apache=if(match(url, "/"), mvindex(split(url, "/"), 0), url) ```if there is a (/) character, we only keep the first part before the first (/); if not, we use the full url field```

EVAL-url_primaire_apache_sans_ports=if(match(url_primaire_apache, ":"), mvindex(split(url_primaire_apache, ":"), 0), url_primaire_apache) ```we use the result from the previous eval to extract only the first part before ":", or the full previous result```

Now the issue is that only the first field is generated. I think that might be expected, since evals are done in parallel. I tried to create an alias on the result of the first eval and then call it in the second eval like this:

FIELDALIAS-url_primaire_apache_alias1=url_primaire_apache AS url_p_a
EVAL-url_primaire_apache_sans_ports=if(match(url_p_a, ":"), mvindex(split(url_p_a, ":"), 0), url_p_a)

However, this still doesn't work: only the first eval field is created; neither the alias nor the second eval is. What am I missing? How can we create eval fields recursively?
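The dependency between the two calculated fields is easier to see in plain code. This Python sketch (sample URL invented) mirrors the two eval expressions as chained steps:

```python
def primaire(url):
    # EVAL-url_primaire_apache: keep only the part before the first "/"
    return url.split("/", 1)[0] if "/" in url else url

def sans_ports(url_primaire):
    # EVAL-url_primaire_apache_sans_ports: strip a ":port" suffix
    return url_primaire.split(":", 1)[0] if ":" in url_primaire else url_primaire

url = "example.com:8080/path/page"   # hypothetical sample value
step1 = primaire(url)                # "example.com:8080"
step2 = sans_ports(step1)            # "example.com"
print(step1, step2)
```

Since the two steps compose, one workaround (a sketch, untested) is to inline the first expression inside the second so each calculated field stands alone, e.g. EVAL-url_primaire_apache_sans_ports=mvindex(split(mvindex(split(url, "/"), 0), ":"), 0) — split returns the whole string when the delimiter is absent, so the if(match(...)) guards may not be needed.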
Could we get some additional information in our Google Chat Splunk alert? For now I am only able to find a way to put $name$ in the message text, but is there a way to add additional information so we can display some of the search query details, like the sample below?

Splunk Alert: "Splunk Alert name"
Status: <status code>
Resource: <resource>
Logs: https://...
Splunk results: https://...
Does anyone have a thorough explanation of the certs in Splunk? And why they are all different yet the same? Can I use the same cert for all situations? Here's a table: https://docs.splunk.com/Documentation/Splunk/9.2.1/CommonCriteria/Commoncriteriainstallationandconfigurationoverview#List_of_certificates_and_keys

These tables aren't very specific, and Splunk generated different certs for each one. I need to use company-specific certs, and am a bit confused about which ones can be the same and which ones can't...
Hi, I got the following error message when trying to connect to an Event Hub:

Error occurred while connecting to eventhub: CBS Token authentication failed. Status code: None Error: client-error CBS Token authentication failed. Status code: None

Can someone help here? We have a HF in our network zone and want to connect to the MS Event Hub via a proxy, which we configured within the app itself. We use the Add-on for MS Cloud Services version 5.2.2.

Thanks
SAML authenticated users are unable to access either REPORTS or ALERTS from the search app @ ./app/search/reports or from the top-level menu @ Settings/Searches, reports, alerts. When they attempt to access reports from the Search app, the page stalls at "Loading Reports". When they attempt to filter on reports or alerts from "Settings/Searches, reports, alerts", a small icon appears at the bottom stating "server error". The reports are listed, but none are accessible. If the user is provided a URL to any report, everything works fine; the ability to browse the list is what is broken. Finally, if a user goes to "Settings/Searches, reports, alerts" and DOES NOT leave "Type:All", everything works fine. If the selection is changed to "Type:Reports" or "Type:Alerts", the error appears at the bottom. Debug logs do not reveal anything obvious. The permissions used for the SAML users are the default "power" role. I tried moving test users to the Admin role; no change. Also, all locally authenticated users in the same role work fine.
I have a visualization of type splunk.table in Dashboard Studio (version 9.0.2). The source table contains a column "id" from which I have derived the column "link":

sourcetype="x" | eval link = "https://xyz.com/" + id | table id, link

I want the "link" column to be visible as a hyperlink (blue and underlined) in the dashboard, such that each value of the column, when clicked, opens the respective link in a new tab. I tried making the changes below; not sure what I am doing wrong here:

"viz_jZKnPQQG": {
    "type": "splunk.table",
    "title": "x",
    "dataSources": {
        "primary": "ds_GresBkrN"
    },
    "options": {
        "tableFormat": {
            "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableRowBackgroundColorsByTheme)"
        },
        "count": 8,
        "backgroundColor": "> themes.defaultBackgroundColor",
        "showRowNumbers": true,
        "fontSize": "small",
        "showInternalFields": false
    },
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "$row.link$",
                "newTab": true
            }
        }
    ]
},
Hi team, I have created a Splunk dashboard using the query below, where we display a metric per stack ID [i.e., "mdq.sId"]. The dashboard is displayed with a legend showing the IDs 54 and 10662. I want to display these IDs with a different name corresponding to each stack ID in the legend. For example, 54 is stack-ind and 10662 is stack-aus.

index="pcs-ing" ins="ingestion-worker" "metric.ingestion.api.import.time" "mdq.sId" IN ("54","10662") | timechart span=60m limit=0 count as ingestion_cycles by mdq.sId

Is it possible to search by the stack ID but display alias names in the legend? For example, in the above dashboard, I want '54' to be shown as 'stack-ind' and '10662' as 'stack-aus'.
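The renaming itself is just a lookup from ID to label. In Python (IDs and names taken from the post, series counts invented):

```python
# Map stack IDs to the display names we want in the legend.
ALIASES = {"54": "stack-ind", "10662": "stack-aus"}

def label(stack_id):
    # Fall back to the raw ID when no alias is defined.
    return ALIASES.get(stack_id, stack_id)

series = {"54": 120, "10662": 95}  # hypothetical per-stack counts
renamed = {label(k): v for k, v in series.items()}
print(renamed)  # {'stack-ind': 120, 'stack-aus': 95}
```

In SPL, one possible approach (a sketch, not a tested search) is to compute the label before charting, e.g. | eval stack=case('mdq.sId'=="54","stack-ind",'mdq.sId'=="10662","stack-aus") | timechart span=60m limit=0 count as ingestion_cycles by stack.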
Hello, I get Splunk Enterprise 6-month 10GB licenses for free home use, as I use Splunk heavily at work and try things in my home lab first. I was on vacation for some time and let my license lapse. This caused multiple items to stop working, primarily the search feature. I added a new license this morning, but search is still restricted. I have tried searching for how to request a reset license, calling to get a reset license, and submitting a support ticket for a reset license. Because I'm on a free account, nothing will allow me to actually request a reset license. For free personal-use Enterprise licenses, can anyone share how to request a reset license so I can resume search functionality?
I have defined a number.input field in Dashboard Studio (version 9.0.2) so that the user can select a number representing a date (between 1-31). I want the date to be set to the current day's date by default when the user opens the dashboard. But "defaultValue": "$token_curr_date$" in the code below throws the error "Incorrect Type. Expected "number"":

{
    "options": {
        "defaultValue": "$token_curr_date$",
        "token": "num_date",
        "min": 1,
        "max": 31
    },
    "title": "Select Date",
    "type": "input.number"
}

In "dataSources" I have defined the below search and token:

"ds_current_date": {
    "type": "ds.search",
    "options": {
        "query": "| makeresults | eval token_curr_date=strftime(now(), \"%d\") | fields token_curr_date"
    },
    "token": "token_curr_date"
}

How do I set the default value of input.number to the current date?
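One detail worth noting: strftime(now(), "%d") produces a zero-padded *string*, while input.number expects a number. The same distinction in Python (illustration only):

```python
from datetime import datetime

now = datetime.now()
day_str = now.strftime("%d")  # zero-padded string, e.g. "05"
day_num = now.day             # plain integer in 1..31, what a number input wants
assert int(day_str) == day_num
print(day_num)
```

In the search, wrapping the value as tonumber(strftime(now(), "%d")) would at least make the token numeric — offered as a sketch, since whether defaultValue accepts a token at all in 9.0.2 is the open question here.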
Hi All, I need your help. I have trained a few services and added the next_30m_avg_score in a glass table, but I don't know how to add dynamic color to the score. What modification do I make in the source code to add the color?

My source code is:

`itsi_predict(40588288-a7ed-42b9-8dec-0c0379e058f9,health_score,app:itsi_predict_40588288_a7ed_42b9_8dec_0c0379e058f9_RandomForestRegressor_d1258935c9f0529f3d510eae_1713353848355)` | table next30m_avg_hs

This is how the glass table looks. Please suggest.
I have 2 indexes in an index cluster, with hot, cold, and frozen storage. Hot and cold are on different disks; frozen will use the same disk for both indexes. My question is: will the frozen data be replicated, or can I save just one copy of each index to frozen and use it as a backup for the index cluster?
Hello Splunk Team,

Who we are: L Squared is a leading digital signage service provider, offering the Hub Content Management System (CMS). This platform empowers users to effortlessly manage and display media content on digital signage screens. We want to integrate Splunk, a powerful data analytics platform, into our ecosystem.

What we want:
- Integrate a read-only version of Splunk apps' dashboards into L Squared Hub via an iframe.
- Implement OAuth 2.0 authentication for secure access token generation, or any other authentication method to obtain an access token securely.
- Provide users with a list of Splunk apps and their respective dashboards for selection.

How to do it: To achieve these objectives, users will follow these steps:
1. Initiate an OAuth 2.0 authentication request to Splunk for access token generation, or utilize client credentials such as username, password, and secret key.
2. Upon successful authorization, users gain access to Splunk REST API endpoints, including:
   - Retrieving a list of installed Splunk apps, e.g.:
     curl -k -u admin:password https://localhost:8089/services/apps/local?output_mode=json
   - Fetching a list of dashboards for a specific Splunk app, e.g.:
     curl -k -u admin:password "https://localhost:8089/servicesNS/{username}/{app_name}/data/ui/views?output_mode=json&search=((isDashboard=1 AND (rootNode=\"dashboard\") AND isVisible=1) AND ((eai:acl.sharing=\"user\" AND eai:acl.owner=\"{username}\") OR (eai:acl.sharing!=\"user\")))"
3. Finally, embed the selected Splunk app's dashboard (read-only version) in L Squared Hub using an iframe.

Who are our end users? This integration empowers organizations to seamlessly monitor and analyze their data through large displays. It enables teams to access up-to-date Splunk data conveniently, enhancing decision-making and operational efficiency.

If you know the right person or the right way to get a solution, please share your ideas with us. Thanks in advance! @MuS @elizabethl_splu @richgalloway
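When building the second REST call programmatically, the search parameter needs URL-encoding (the raw &, =, quotes, and parentheses would otherwise be mangled by the shell or HTTP layer). A Python sketch with placeholder host, user, and app values:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute real host, user, and app names.
base = "https://localhost:8089/servicesNS/{username}/{app_name}/data/ui/views"
username, app_name = "admin", "search"

# The dashboard filter from the post, with the username substituted in.
search = ('((isDashboard=1 AND (rootNode="dashboard") AND isVisible=1) AND '
          '((eai:acl.sharing="user" AND eai:acl.owner="%s") OR '
          '(eai:acl.sharing!="user")))' % username)

# urlencode percent-escapes the quotes, parentheses, and spaces for us.
params = urlencode({"output_mode": "json", "search": search})
url = base.format(username=username, app_name=app_name) + "?" + params
print(url)
```

The resulting URL can then be fetched with any HTTP client using basic auth or a bearer token, matching the curl examples above.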
After I updated the add-on to 6.3.x, I am not able to create or update account settings under the account type Tenable.sc credentials (deprecated). I have tried versions 6.3.2 and 6.3.6; both failed with the error "please enter valid address, username and password or configure valid proxy settings or verify ssl certificate". I am using credentials only and no proxy. Using version 6.1.0 of the add-on, I can create/update the account with the same info.
Almost all the videos on YouTube run the Splunk server on the same victim computer, which has "Local Windows network monitoring"; my server on Kali does not have it. I don't know how to catch the events of the attack, although I am using the TAwinfw Technology Addon for Windows Firewall. When searching index="firewall", it returns no results. Can someone help me, please?
Hi,

I have to replace all the possible delimiters in the field with spaces so that I capture each word separately. Example: 5bb2a5-bb04-460e-a7bc-abb95d07a13_Setgfub.jpg. I need to remove the extension as well; it could be anything, e.g. .csv or .xslx or .do. I need the output as below:

5bb2a8d5 bb04 460e a7bc bb995d07a13 Setgfub

I came up with an expression which works fine, but I need this as either a regular expression or an eval expression, as I am using it for a data model:

| makeresults
| eval test="ton-o-mete_r v4.pdf"
| rex field=test mode=sed "s/\-|\_|\.|\(|\)|\,|\;/ /g"
| eval temp=split('test'," ")
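The same replacement logic, stated in Python for clarity (file name taken from the post's own example):

```python
import re

def tokenize(name):
    """Strip a trailing file extension, then turn the delimiter characters
    from the post (- _ . ( ) , ;) into spaces."""
    name = re.sub(r"\.[^.]+$", "", name)    # drop a trailing ".ext"
    return re.sub(r"[-_.(),;]", " ", name)  # delimiters -> spaces

print(tokenize("ton-o-mete_r v4.pdf"))  # "ton o mete r v4"
```

For a data-model calculated field, where rex is not available, a possible eval-only equivalent (a sketch, untested) is replace(replace(field, "\.[^.]+$", ""), "[-_\.\(\),;]", " "), since eval's replace() accepts a regex.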
Hi all, my Splunk instance is configured behind a proxy to access the Internet. The proxy will only allow access to specified URLs. I want to use "Find More Apps" to download apps directly, without having to download and upload SPL files. Which URLs do I need to open the rule for? Thanks