All Posts


Hi Splunk experts, I am looking to display a status as Green/Red in a Splunk dashboard after comparing the values of Up & Configured in the below screenshot of log entries. If both are equal it should be Green, else Red. Can anyone please guide me on how to achieve that?
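A minimal sketch of the usual approach, assuming Up and Configured are already extracted as fields (the index, sourcetype and device field names below are placeholders, not taken from the post): compare the two values with eval and let the dashboard colour the resulting column.

index=network_devices sourcetype=device_status
| eval status=if(tonumber(Up)==tonumber(Configured), "Green", "Red")
| table device, Up, Configured, status

In a dashboard table you can then apply colour formatting to the status column (mapping "Green" and "Red" to the corresponding colours) from the panel's format options. If Up and Configured are not extracted automatically, a rex extraction would be needed first.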
Hi @psomeshwar , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hello Team, Can anyone please help me clarify the following query and suggest a better approach for deploying the Observability solution? I have an application deployed as a High Availability solution, in that it acts as Primary/Secondary, so the application runs on only one of the nodes at a time. We are now integrating our application with Splunk Enterprise for Observability. As part of the solution, we are deploying the Splunk OTel Collector + FluentD agent to collect metrics/logs/traces. How do we manage this integration? If the application is running on HOST A, I need to make sure both agents (Splunk OTel Collector + FluentD) are up and running on HOST A to collect and ingest data into Splunk Enterprise, while the agents on the other host, HOST B, need to be idle so that we don't ingest data into Splunk. This can be achieved by deploying a custom script (executed frequently under cron, say every 5 minutes, to check where the application is active and start the agent services accordingly). But how do we make sure the data ingested into Splunk is appropriate (without any duplicates) when handling this scenario, given there are 2 different hosts? We would also like to avoid a drop-down in the dashboard for selecting the appropriate host to filter the data, because that makes it hard for the business team to understand where the application is currently running and select the host accordingly, so this approach does not make great sense to me. Is there a better approach to handle this situation? In case we have a load balancer for the application, can we use it to tell the Splunk OTel Collector + FluentD to collect data only from the active host and then send the data through the HTTP Event Collector?
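One hedged sketch for the drop-down concern, assuming the application's data lands in an index named app_logs (a placeholder) and only the active node is actually shipping events: a dashboard base search can work out which host reported most recently and pass it to the other panels as a token, so nobody has to pick the host manually.

index=app_logs earliest=-15m
| stats latest(_time) AS last_seen BY host
| sort - last_seen
| head 1
| fields host

The panels would then filter on host=$active_host$ (a hypothetical token name set from this base search). This does not solve the duplicate-ingestion question itself; that still depends on only one node's agents running at a time.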
Hello @gcusello  I managed to get it to work. The solution I used was: (index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) | rename cid as cid1 | rename jsonevent.cid as cid2 | eval jcid = coalesce(cid1, cid2) | stats values(ApplicationName) AS ApplicationName values(ApplicationVersion) AS ApplicationVersion values(ApplicationVendor) AS ApplicationVendor values(hostname) AS hostname values(username) AS username BY jcid Thanks, this thread helped me a lot.
Hi @psomeshwar , please try this: (index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) | eval cid=coalesce(cid,'jsonevent.cid') | stats values(ApplicationName) AS ApplicationName values(ApplicationVersion) AS ApplicationVersion values(ApplicationVendor) AS ApplicationVendor values(hostname) AS hostname values(username) AS username BY cid (note the single quotes around jsonevent.cid: in eval, a field name containing a dot must be quoted, otherwise the dot is read as string concatenation). Ciao. Giuseppe
Hello @gcusello  That is exactly what I did. The field name for cid in index1 is "cid" and the field name for cid in index2 is "jsonevent.cid". When I used the rename command, I only got the results from index2, and when I did not use the rename command, I only got the results from index1.
Hi @Hassaan.Javaid, Please check out this existing post and let me know if it helps: https://community.appdynamics.com/t5/Infrastructure-Server-Network/Kubernetes-cluster-agent-can-not-connect-404/m-p/43062
Hi @Dalia.Alaa, You can find out more about contacting support here: How do I submit a Support ticket? An FAQ  I can tell from your email that your company is a contractual customer, but your account shows you as a trial user, who does not have access to contact Support. Is there a reason you are not part of the main account for your company?
Hi @Orange_girl., what do you mean by "splunk searches uses .csv file"? Are you using a lookup or an inputcsv? Ciao. Giuseppe
Hi @Satish.Babu,  Were you able to look into what @Sunil.Agarwal mentioned above? 
Hi @psomeshwar, what are the exact field names of cid in both the indexes? If they are cid and jsonevent.cid (that's a supposition, please confirm it), please try the above solution again, using the correct field name in the rename command. Ciao. Giuseppe
Hello, one of my Splunk searches uses a .csv file. I'm trying to find where the .csv is located within Splunk and I can't find it. Is there any command I can run in Splunk to find the file location, please?
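If the .csv is a lookup file, a hedged sketch of one way to locate it (the lookup file name below is a placeholder): the lookup-table-files REST endpoint lists uploaded lookup files and reports the owning app and the on-disk path in the eai:data field.

| rest /servicesNS/-/-/data/lookup-table-files
| search title="mylookup.csv"
| table title eai:acl.app eai:data

If the search uses inputcsv rather than a lookup, the file is instead read from $SPLUNK_HOME/var/run/splunk/csv on the search head, as far as I recall.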
Hello @gcusello  I tried that and it didn't work. Let me show how each search works:
(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) | stats values(ApplicationName) AS ApplicationName values(ApplicationVersion) AS ApplicationVersion values(ApplicationVendor) AS ApplicationVendor values(hostname) AS hostname values(username) AS username BY cid
Result: cid=743fsd234, ApplicationName=AppName, ApplicationVersion=AppVersion, ApplicationVendor=AppVendor, hostname=null, username=null
(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) | rename jsonevent.cid AS cid | stats values(ApplicationName) AS ApplicationName values(ApplicationVersion) AS ApplicationVersion values(ApplicationVendor) AS ApplicationVendor values(hostname) AS hostname values(username) AS username BY cid
Result: cid=743fsd234, ApplicationName=null, ApplicationVersion=null, ApplicationVendor=null, hostname=hostname, username=username
https://docs.splunk.com/Documentation/Splunk/latest/admin/inputsconf#Event_Log_allow_list_and_deny_list_formats * $XmlRegex: Use this key for filtering when you render Windows Event log events in XML by setting the 'renderXml' setting to "true". Search the online documentation for "Filter data in XML format with the XmlRegex key" for details. Also remember that transforms are not (typically) run on UFs. So your setnull transform is _not_ run if defined on the UF.
Hi @psomeshwar , rename it to have the same field name: (index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2) | rename jsonevent.cid AS cid | stats values(ApplicationName) AS ApplicationName values(ApplicationVersion) AS ApplicationVersion values(ApplicationVendor) AS ApplicationVendor values(hostname) AS hostname values(username) AS username BY cid  Ciao. Giuseppe
First and foremost - look into your _internal index for errors. There you should find some indication as to why the connections downstream don't work. But your hunch about TLS inspection may be right. If your SSL visibility solution creates certificates with a CA your HF doesn't know, it will not connect to the receivers in the cloud because the connections are not trusted (the authenticity of the certificate cannot be verified by any known CA certificate). If this is the case, you have two possible solutions:
1) Create an exception in your TLS inspection policy (which makes sense in a typical use case, since you typically don't need and don't want to inspect the Splunk traffic - there isn't much to be inspected there).
2) Deploy your organization's RootCA to the HF so that the cert created by your TLS inspection solution is deemed trusted.
I'd probably push for the former solution, but YMMV.
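A minimal sketch of the kind of _internal search that usually surfaces forwarding and TLS trust problems on the HF (the host filter is a placeholder and the component list is just a starting point, not exhaustive):

index=_internal host=<your_hf> sourcetype=splunkd log_level=ERROR (component=TcpOutputProc OR component=SSLCommon OR component=AutoLoadBalancedConnectionStrategy)

If TLS inspection is the culprit, you would expect to see handshake or certificate-verification errors in those events.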
Yes. That is how I'd interpret the inputs.conf spec as well. I can understand, though, why just one value would be effective (it's, after all, just one input bound to one port, and the data is just internally split between the various tokens), but the docs are ambiguous on this one, to say the least.
Hello, Thanks, this does help a little; however, there is one problem. One of the indexes has its events in JSON format, and the cid is formatted as jsonevent.cid. As a result, I am only getting one side of the events, and the other is blank. Is there a way to work around this?
Also, dedup is a tricky command. It returns just the first occurrence of an event with the given deduped field(s) _in search order_ (which doesn't have to be what you need).
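A small hedged illustration (the index and field names are made up): if you want the most recent event per host rather than whichever one happens to come first in the search results, sort explicitly before deduping.

index=web_logs
| sort 0 - _time
| dedup host
| table _time host status

The 0 in sort 0 removes sort's default 10,000-result limit, so the dedup sees the full, correctly ordered set.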
OK. Is the L: drive a local device or a network path mounted locally? (That's not clear from your description.)