All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I hope all is well. I have been struggling with the data model concept: why and when do we use a data model, and how does it increase performance? I understand that it is structured data with three types of datasets, and I am able to create one by following the how-to. But why use it? When should it be used? What is the main idea behind it?
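For context, the main practical payoff is that an accelerated data model lets tstats run against pre-built summaries instead of raw events. A minimal sketch, assuming the CIM Web data model is accelerated in your environment:

| tstats count from datamodel=Web where Web.status=404 by Web.http_method

The same question asked of raw events has to scan every event; asked of the accelerated data model it reads only the summaries, which is where the speed-up (and much of the reason to define the model) comes from.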
Is it possible to reconfigure Splunk to use _indextime instead of _time for data retention policy?
Why do the Approval settings work in some actions and not in others?
I have been trying to create some analyses in Splunk for a few weeks now. Sometimes I succeed, sometimes I fail. I appreciate the help from community users a lot, and the results are sometimes amazing. Still, I don't feel comfortable yet and I run into a lot of problems with the syntax and rules of the Splunk language. I am thinking about a page/tutorial/blog/YouTube channel, something like "Splunk for the relational DBA", where I could read about the theory, rules, and syntax of commands like stats, join, append, timechart, and others that can manipulate multiple indexes, their relations, and aggregations, with examples. Of course this community is a mine of examples and recipes, but maybe there is a place where such topics are described and explained in a more approachable, structured way. Any ideas or hints? K.
Hello, I want to collect logs from a machine that is set to French. Consequently, the logs are generated in French, which makes parsing them difficult. Is it possible to collect the logs from the machine in English while keeping the machine's language set to French?
Hi, how can I add a "read more" link for table field values longer than 50 characters in a Splunk classic dashboard?
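A hedged sketch of the usual workaround, assuming a field called description: shorten the value with eval and let a drilldown (or a separate detail panel) expose the full text when the user clicks the row:

| eval description_short=if(len(description)>50, substr(description,1,50)." ... read more", description)

The table then shows description_short, and the row drilldown can pass the full description into a token displayed in another panel.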
I have created a custom search command and I am piping the output of another SPL query into it. For small amounts of data it works fine, but when the preceding query produces a large amount of data, my custom command returns incomplete results. I have only specified the filename attribute in the commands.conf stanza for my custom command; could that be the reason?
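For reference, a sketch of a commands.conf stanza that opts into the chunked (v2) protocol rather than relying on filename alone; the stanza and script names are placeholders, and whether this resolves the truncation depends on how the command script is written:

[mycustomcommand]
filename = mycustomcommand.py
chunked = true
python.version = python3

With only filename set, the command runs under the legacy protocol, whose defaults (for example maxinputs) can limit how many results are handed to the script per invocation.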
Hi at all, I have a new doubt about the sequence of activities at index time. I have a data flow arriving over HEC on a Heavy Forwarder that I need to process, because the data comes from a concentrator and covers many different data flows (Linux, Oracle, etc.). I therefore have to assign the correct sourcetype and rework the logs, because they are modified by securelog: the original logs are wrapped in a JSON field with some added metadata. I configured the following flow:

in props.conf:

[source::http:logstash*]
TRANSFORMS-000 = global_set_metadata
TRANSFORMS-001 = set_sourcetype_by_regex
TRANSFORMS-001 = set_index_by_sourcetype

in transforms.conf:

[global_set_metadata]
INGEST_EVAL = host := coalesce(json_extract(_raw, "host.name"), json_extract(_raw, "host.hostname")), relay_hostname := json_extract(_raw, "hub"), source := "http:logstash".coalesce("::".json_extract(_raw, "log.file.path"), "")

[set_sourcetype_by_regex]
INGEST_EVAL = sourcetype := case(searchmatch("/var/log/audit/audit.log"), "linux_audit", true(), "logstash")

[set_index_by_sourcetype]
INGEST_EVAL = index:=case(sourcetype=linux, "index_linux", sourcetype=logstash, "index_logstash")

In this flow: the first transform extracts (using INGEST_EVAL) metadata such as host, source, and relay_hostname (the concentrator the logs arrive from); the second assigns the correct sourcetype based on a regex; and the third assigns the correct index based on the sourcetype, using INGEST_EVAL to avoid re-running a regex. The first two transforms are executed correctly, but the third doesn't use the sourcetype assigned by the second one. I also tried a different approach using CLONE_SOURCETYPE in the second one (instead of INGEST_EVAL) and it runs, but I'm checking whether the flow above can work, because it is more linear and should be lighter on the system. Where could I look for the issue? Is there something wrong in the activity flow? Thank you to all. Ciao. Giuseppe
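Purely as a syntax reference, a minimal sketch of a string comparison inside an INGEST_EVAL case(), with the values quoted and a fallback branch; the stanza and sourcetype names simply mirror the ones above, and this illustrates the quoting rather than diagnosing the ordering question:

[set_index_by_sourcetype]
# assumes the sourcetypes assigned by the previous transform; string literals are quoted
INGEST_EVAL = index := case(sourcetype=="linux_audit", "index_linux", sourcetype=="logstash", "index_logstash", true(), index)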
Hello, does anyone have experience configuring Splunk DB Connect with an Informix database? Do we need to install the drivers explicitly for this to be configured? If yes, does anyone have a link to where I can download these drivers? I am using a Linux environment. Thanks in advance.
I have a field in my data named severity that can be one of five values: 1, 2, 3, 4, and 5. I want to chart on the following groups: 1-3, 4, and 5. Anything with a severity value of 3 or lower can be lumped together, but severity 4 and 5 need to be charted separately. The coalesce command is close, but in my case the key is the same; it's the value that changes. None of the mv commands look like they do quite what I need, nor does nomv. The workaround I've considered is an eval command with an if statement: if the severity is 1, 2, or 3, set a new field to 3, then chart off this new field. It feels janky, but I think it would give me what I want. Is it possible to do what I want in a more elegant manner?
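A sketch of the eval you describe, written with case() so the bucketing stays in one expression (field names taken from your post):

| eval severity_group=case(severity<=3, "1-3", severity==4, "4", severity==5, "5")
| timechart count by severity_group

This is essentially the approach you outlined; it is a common pattern rather than a janky one.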
Hi everybody, I'm trying to monitor a demo web app deployed with Kubernetes, but even following the documentation I come up short. The web app consists of 4 containers, all running properly on Ubuntu Server 24.04, using MicroK8s and kubectl. I followed this guide in the documentation: https://docs.appdynamics.com/appd/24.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/install-the-cluster-agent-with-the-kubernetes-cli Everything was clear for the seven steps, but I have to point out that in step 6 the YML example shows port 8080 for the controllerUrl, and since I have the SaaS version I changed it to 443. When I validated the installation I noticed that the cluster agent was not registered, so I started following the troubleshooting docs. When I retrieved the logs for the namespace with the operator and the cluster-agent I found the following error: [ERROR]: 2024-07-03 15:52:57 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory I don't know why it is looking for that specific path, at which point I should create it, or where to find that api-user; it was not in the docs. I'd be really thankful if someone could help me with this issue. Hope everybody has a nice day. Regards
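For what it's worth, a sketch of the kind of Kubernetes secret the agent seems to be looking for; the namespace, secret name, and key names here are assumptions inferred from the error path and typical cluster-agent setups, so please check them against the version of the docs you are following:

kubectl -n appdynamics create secret generic cluster-agent-secret \
  --from-literal=controller-key=<controller-access-key> \
  --from-literal=api-user=<api-user>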
I'm looking to get all failed event logs based on a field, and then find the success event log for the same field, so that I have a net failed event log to report. Basically, I'm trying to build an alert that I can run every X hours, looking back X hours, that finds ONLY the failed logs that haven't succeeded yet. The failures are retried at regular intervals. In short: get all failed events and weed out the ones that succeeded on a later retry, so I am left only with the ones that are still failing. I'm trying something like the query below, but I think there should be a better way:

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" errorCd="*-701" status=FAILED
| where jobNumber NOT IN
    [ search index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" status=SUCCESS ]
| table _time accountNumber jobNumber letterId errorCd
| sort _time

TIA!!
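One alternative sketch that avoids the subsearch: pull both outcomes in a single search, group by jobNumber, and keep only the jobs that never logged a SUCCESS. The field names are copied from your query, latest() keeps the details of the most recent attempt, and the errorCd="*-701" filter is reapplied after the stats so it does not drop the SUCCESS events:

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" (status=FAILED OR status=SUCCESS)
| stats values(status) as statuses latest(_time) as _time latest(accountNumber) as accountNumber latest(letterId) as letterId latest(errorCd) as errorCd by jobNumber
| where isnull(mvfind(statuses, "SUCCESS"))
| search errorCd="*-701"
| table _time accountNumber jobNumber letterId errorCd
| sort _time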
Hello all, is there any method to decrypt the identity password in the Splunk DB Connect app? We are using Splunk DB Connect version 3.11.1.
I have a panel (A) that uses a token (usr.name) for input. The token is set by another panel (B) when the user clicks on a user name. In its current state the viewer gets no results in panel A when the dashboard loads, and results only appear once the viewer clicks a username in panel B. I am attempting to initialize the dashboard with a set of default results based on the currently logged-in user: if I pull up the dashboard, I would like the results in panel A to default to my username. The problem I am running into is that when I set the initial value, it is accepted as plain text instead of the token value.

<init>
  <set token="usr.name">$env:user$</set>
</init>
<row>
  <html>
    $usr.name$
  </html>
</row>

This code displays the $usr.name$ token as "$env:user$" instead of my username. My eventual goal is to have panel A display results for myself when I first access the dashboard, and for another user if I click on their name in the results of panel B. I have been coming up empty on Google, so I am reaching out here to see if anyone has any ideas.
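One commonly suggested pattern, sketched here without having been verified on your Splunk version: resolve $env:user$ inside a small global search and set the token from the result in a done handler, so the token carries the expanded value instead of the literal string:

<search>
  <query>| makeresults | eval user="$env:user$"</query>
  <done>
    <set token="usr.name">$result.user$</set>
  </done>
</search>

Panel B's drilldown can keep overwriting usr.name exactly as it does now, so a click still takes precedence after the initial load.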
Hello, I created a Splunk app with 4 dashboards and configured the app navigation bar to show only those 4 dashboards:

<nav>
  <view name="dashboard_1"/>
  <view name="dashboard_2"/>
  <view name="dashboard_3"/>
  <view name="dashboard_4"/>
</nav>

However, the user permissions for each of them differ depending on the user's role. Some users can only see DASHBOARD_1, others can only see DASHBOARD_2, some can see both DASHBOARD_2 and DASHBOARD_3, among a series of other combinations. My problem is with the users that can only view dashboards 2, 3 and 4: even though they explicitly DO NOT have permission to view DASHBOARD_1, when they enter the app Splunk always tries to open that one. As you can imagine, the result is an error page with the horse and the "Oops.". I understand that because it is the first one in the navigation bar, Splunk assumes it is the homepage of the app. But I also expected Splunk to take the user permissions into consideration, meaning that if a certain user only has permission to see dashboards 3 and 4, the app navigation bar should show only those 2 options, and when they open the app it should open on DASHBOARD_3. This was working great a few months back and at some point it just stopped; I can't say exactly when that happened. I managed to find a workaround by replacing all the <view> entries shown above with:

<nav>
  <view source="all" match="dashboard_1"/>
  <view source="all" match="dashboard_2"/>
  <view source="all" match="dashboard_3"/>
  <view source="all" match="dashboard_4"/>
</nav>

However, a few days ago Splunk was updated from version 9.0.2 to 9.2.1 and the workaround stopped working as well. I'm sure I'm missing something. What can I do so that Splunk obeys the dashboards' user permissions in this situation and doesn't redirect the user to a dashboard they don't even have permission to view? Thank you.
We get data in using HEC tokens, and the data is flowing just fine. But when we try to view the HTTP Event Collector panel under Indexing > Inputs, it says we have no tokens configured. How do we configure the MC to see the existing tokens?
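In the meantime, a sketch of how to list HEC tokens directly over REST, to confirm on which instance(s) they are actually defined; the host, port, and credentials are placeholders:

curl -k -u <user>:<password> https://<splunk-host>:8089/services/data/inputs/http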
Good morning, I am requesting a link to download a previous version of the Splunk Forwarder: Windows x64, version 7.2.6. I'm trying to repair the installation, but it is requesting the MSI to complete. Thank you.
Hello, I would like to merge 2 indexer clusters.

Context
- 2 indexer clusters
- 1 search head cluster

Objectives
- Add new indexers to cluster B.
- Move data from cluster A to cluster B.
- Remove cluster A.

Constraint
- Keep service interruptions to a minimum.

What do you think of this process?

Before starting
- Make sure the clusters have the same Splunk version.
- Make sure the clusters have the same configuration.
- Make sure volumes B can absorb indexes A.
- Make sure common indexes have the same configuration. If not, define their final configuration.

Add new peer nodes
- Install new peer nodes.
- Add new peer nodes to cluster B.
- Rebalance data.
- Add new peer nodes to outputs.conf and restart.

Move data
- Remove peer nodes A from outputs.conf and restart.
- Move indexes configuration from A to B.
- Copy peer apps from A to B.
- Put peer nodes A in manual detention to stop replication from other peer nodes.
- Add peer nodes A to cluster B.

Remove peer nodes A (one indexer at a time)
- Remove peer node A from cluster B.
- Wait for all the fixup tasks to complete so the cluster meets the search and replication factors.
- Rebalance data.

Finally
- Make sure there is no major issue in the logs.
- Update the diagram and inventory files (spreadsheets, inventory files, lookups, etc.).
- Update dashboards and reports if necessary.
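For the rebalance and peer-removal steps, a sketch of the CLI typically involved (exact options can vary by Splunk version):

# On the cluster manager: start and monitor a data rebalance
splunk rebalance cluster-data -action start
splunk rebalance cluster-data -action status

# On the peer being decommissioned: take it offline and let the cluster
# meet the replication and search factors before moving to the next peer
splunk offline --enforce-counts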
Can someone help me understand what I am doing wrong here? My requirement: I have an index, prod_syslogfarm, which reports on the devices forwarding logs to the syslog collectors. The devices may report with either a hostname, an IP address, or an FQDN. I have to compare this with our master asset inventory (the lookup myinventory.csv below) and create a report of the hostnames that are not seen in the prod_syslogfarm index. I am using the hostname as the common field between the main search and the lookup file, and below is my query. This query is not working, as the report contains hostnames that are present in the syslogfarm index.

index=prod_syslogfarm
| stats count by IP_Address
| lookup myinventory.csv IP_Address OUTPUT Hostname
| table IP_Address Hostname
| rename Hostname as Reporting_Host
| appendcols
    [ search index=prod_syslogfarm
      | eval fqdn_hostname=lower(fqdn_hostname)
      | eval Reporting_Host=lower(Reporting_Host)
      | eval Reporting_Host=mvappend(Reporting_Host, fqdn_hostname) ]
| dedup Reporting_Host
| table Reporting_Host
| rename Reporting_Host as Hostname
| appendcols
    [ inputlookup myinventory.csv
      | eval Hostname=lower(Hostname)
      | stats values(Hostname) as cmdb_hostname by Hostname ]
| eval missingname = mvmap(cmdb_hostname, if(cmdb_hostname != Hostname, cmdb_hostname, null()))
| table missingname
| mvexpand missingname
| lookup myinventory.csv Hostname as missingname OUTPUT Environment Tier3 Operating_System
| table missingname Environment Tier3 Operating_System
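For comparison, a sketch of the "inventory minus reporting hosts" pattern, which avoids appendcols entirely (appendcols pairs rows by position rather than by hostname, which is one likely reason reporting hosts leak into the result). The field names host and fqdn_hostname are assumptions taken from your query, and you may still need the IP_Address-to-Hostname lookup inside the subsearch as you do today:

| inputlookup myinventory.csv
| eval Hostname=lower(Hostname)
| search NOT
    [ search index=prod_syslogfarm
      | eval Hostname=lower(coalesce(host, fqdn_hostname))
      | stats count by Hostname
      | fields Hostname ]
| table Hostname Environment Tier3 Operating_System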