All Posts

You are deduping 'x', so you need to understand the consequences of that. Your search is not doing any aggregations, so without knowing what combinations of Application, Action and Target_URL you have, it's impossible to know what's going on here. These 3 lines are most likely the source of your problem:

| mvexpand x
| mvexpand y
| dedup x
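To illustrate the consequence with a self-contained sketch (hypothetical values, not your data):

| makeresults
| eval x=split("a,b", ","), y=split("1,2", ",")
| mvexpand x
| mvexpand y
| dedup x

The two mvexpand commands produce four rows, (a,1), (a,2), (b,1), (b,2), but dedup x keeps only the first row per distinct x value, so only (a,1) and (b,1) survive; the other y combinations are silently dropped. If distinct x/y pairs are what you actually want, dedup x y would keep all four here.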
Hello Champs, this message is info only and can be safely ignored. Alternatively, you can turn it off by setting the TcpInputProc log level to WARN. If you can't restart splunkd yet, simply run:

$SPLUNK_HOME/bin/splunk set log-level TcpInputProc -level WARN

To make the change persistent:
* Create or edit $SPLUNK_HOME/etc/log-local.cfg
* Add: category.TcpInputProc=WARN
* Restart splunkd.
@all When I try to install and configure the #otel collector to send data in agent mode to a gateway collector in #Splunk Observability Cloud, I face many challenges and cannot get the agent to send data to the gateway. Can anyone guide me on how to solve this issue?
Hi @neilgalloway, does it give any error when you save the identity? Would you please share a screenshot of the error you are receiving when trying to save the connection using that identity?
Hi @SureshkumarD, would it be possible to provide some sample data to go with the search?
Hi @pgabo66,
you have to create a new field extraction, associating it to your sourcetype and applying this regex to the event.url field:

^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+)

Ciao.
Giuseppe
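A minimal sketch of how this could look in props.conf, assuming a hypothetical sourcetype name my_sourcetype (the "in <field>" suffix tells EXTRACT to run the regex against event.url instead of _raw):

[my_sourcetype]
EXTRACT-url_domain = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+) in event.url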
Hi @mahesh27
As @bowesmana said, this is a classic proving-the-negative issue and you can find thousands of answers in Community. In this case you have two solutions.

If you have a list of hosts to monitor to put in a lookup (called e.g. perimeter.csv, with at least one column called host), you could run something like this:

| tstats count WHERE index=app-logs sourcetype=app-data source=*app.logs* host IN (appdatajs01, appdatajs02, appdatajs03, appdatajs04) BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total<100

If you don't have a lookup, or you don't want to manage one, you could run something like this:

| tstats count latest(_time) AS _time WHERE index=app-logs sourcetype=app-data source=*app.logs* host IN (appdatajs01, appdatajs02, appdatajs03, appdatajs04) earliest=-30d@d latest=now BY host
| where _time<now()-3600

In this way, you get the hosts that sent logs in the last 30 days but not in the last hour (you can modify the time periods as needed). In addition, the command | bin span=1m _time makes no sense because you don't use time in your stats.

Ciao.
Giuseppe
I understand the look-and-feel concern. You can search https://ideas.splunk.com/ to see if someone is asking for this parity; if not, you can submit an idea for it.
@Cansel.OZCAN I have analytics, but it is still not showing. Can you please tell me the query behind the "top 10 sessions by weight" widget?
This ask could have two interpretations. The simple one is extremely simple. Let me give you the formula first:

| inputlookup pod_name_lookup where NOT [search index=abc sourcetype=kubectl | eval pod_name = mvindex(split(pod_name, "-"), 0) | stats values(pod_name) as pod_name]
| stats dc(pod_name) as count values(pod_name) as pod_name by importance

Your mock data will give you something like:

pod_name  importance
podc      critical

Now, my interpretations of your use case. First, I think your lookup table actually looks like this, with pod_name as the column name instead of pod_name_lookup. Is this correct?

pod_name  importance
poda      non-critical
podb      critical
podc      critical

I call the lookup "pod_name_lookup". Second, I interpret the "pod_name" column in the lookup table, mocked up as "poda", "podb", "podc", to be the first part of the running pod names (mocked up as "poda-284489-cs834" and "podb-834hgv8-cn28s") that does not contain a dash. If this is not how the two names match, you will need to either make the transformation or come up with more accurate mockups.

Finally, I am assuming that "importance" in the lookup and in the events match exactly. If you want to detect discrepancies in "importance" as well, the search will be more complicated.
Hi @sphiwee,
if you don't know PowerShell very well, why do you want to use it? You can use a simple batch script, some other tool (such as Ansible), or a Windows GPO (surely you have a Domain Controller).
Anyway, you could see this link for detailed instructions: https://docs.splunk.com/Documentation/Forwarder/9.0.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller#Install_a_Windows_universal_forwarder_from_the_command_line
or the solution from this Community Champion: https://community.splunk.com/t5/Getting-Data-In/Powershell-unattended-installation/m-p/81069
Ciao.
Giuseppe
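For reference, a minimal unattended-install sketch along the lines of what those docs describe (the MSI filename and hostnames are placeholders; check the flags against the docs for your forwarder version):

msiexec.exe /i splunkforwarder-9.0.1-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" RECEIVING_INDEXER="idx.example.com:9997" /quiet

Run from an elevated prompt; the same line can be pushed to remote hosts via a GPO startup script or a software-distribution tool.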
I'm sure someone here has worked on a PowerShell script to install Splunk on different Windows hosts remotely. Can I get help with that? My PowerShell skills are really weak.
Start with this:

index=abc sourcetype=kubectl
| stats count by pod_name, status, importance
| rex field=pod_name "^(?<pod_name_lookup>pod.)"
| inputlookup append=t pod_lookup.csv
| fillnull value=0 count
| stats max(count) as count values(status) as status values(pod_name) as pod_name by pod_name_lookup, importance

This gets the pods seen in the data, creates the lookup name with rex, then appends the control lookup to the end, filling the count/status with 0 and something suitable. The second stats just joins them together on the lookup pod name and criticality. If count is 0 then you do not have any pods of that variant. The pod_name and status fields will give you all the values seen for the pod_name in the data - use them if you need them, otherwise remove them. So, you can do

| where count=0

to get the missing pods.
Hello,
1. Is there an option (built in or manually built) for a container to view the history of older containers with the same artifacts and details? It would make an analyst's work easier to see the notes and how the older case was solved.
2. By enabling "logging" for a playbook, where are the logs stored (e.g. under /opt) to access later on, besides viewing the debugging output in the UI?
Thank you in advance!
Hi @fishn
To match the partial string in the lookup (e.g. poda) with the data (e.g. "poda-284489-cs834"), you need to append each of the pod_name_lookup values with a wildcard asterisk, i.e. poda*, podb*, podc*.
Then, add a lookup definition with wildcard matching enabled under the Advanced options checkbox (i.e. Match type set to WILDCARD(pod_name_lookup)).
Then in your search (where lkp_pod_name is your lookup definition):

| lookup lkp_pod_name pod_name_lookup as pod_name

---
Next, to show which pods are missing and their importance, you can do it like this:

index=abc sourcetype=kubectl
| eval Observed=1
| append [| inputlookup lkp_pod_name | eval Observed=0 ]
| lookup lkp_pod_name pod_name_lookup as pod_name OUTPUT pod_name_lookup
| stats max(Observed) as Observed by pod_name_lookup, importance
| where Observed=0

---
Finally, to count how many critical and non-critical pods are not found, as well as table the list of missing pods, you can append this line to the above search:

| eventstats count as count_by_importance by importance
Hi, just curious - what are the security implications to think about when enabling this? If it is used in conjunction with the trusted domain list in web-features.conf, should we be secure? Or is there something else?
In your first attempt it should have been like this:

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| inputlookup append=t host_lookup.csv
| fillnull count value=0
| stats max(count) as count by host
| where count<100

But if you want to search at 1-minute granularity, what happens when one minute is > 100 and another is < 100?
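A minimal sketch of the per-minute variant, assuming you do want to flag any single minute under the threshold:

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| bin span=1m _time
| stats count by _time, host
| where count<100

Note that this only produces rows for minutes where a host logged at least one event; a completely silent minute yields no row at all, which is the proving-the-negative problem again, so the lookup-append trick above is still needed for fully missing hosts.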
Thanks for your practical answer - this is not what I asked for, but it is really what I need. Appreciate it very much!
Hi,
I finished upgrading Splunk ES to 7.3.0 on 1 of 2 non-clustered Search Heads and I receive this error in the Search Head Post Install Configuration wizard menu: "Error in 'essinstall' command: Automatic SSL enablement is not permitted on the deployer". Splunk support recommended changing a setting in web.conf to "splunkdConnectionTimeout = 3000", which I added to the system file and then restarted splunkd. Unfortunately this timeout setting does not fix this "known issue". I selected the Enable SSL option in the Post Config Process, as I know that SSL is enabled in both the Deployer and SH web configs. If anyone has a workaround for this, or can suggest how I can enable SSL after the post configuration of Splunk ES on both the SH and Deployer, it would be appreciated.
Thanks
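For reference, the setting mentioned above would normally go under the [settings] stanza, e.g. in $SPLUNK_HOME/etc/system/local/web.conf (a sketch of what support suggested, not a fix for the essinstall error itself):

[settings]
splunkdConnectionTimeout = 3000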
Hi @vm_molson, is the search for the lookup file within the same dashboard, or in some other dashboard linked from a drilldown? If it's within the same dashboard, you can simply add something like this to the search for the lookup:

| search date<=$global_time.latest$ date>=$global_time.earliest$

However, if you want to link to a different search, you might need to go down this route (link) where you would add the variables directly in the URL parameters of the link the user would click on.