All Posts

I have a large KV store lookup (approximately 1.5-2 million rows and 4 columns), and I need to create a search that adds 2 new columns to it from corresponding data. Essentially the lookup is like this:

Server | Time | Variable1 | Variable2

and I need it to look like this:

Server | Time | Variable1 | Variable2 | Variable3 | Variable4

My current search is like this:

index=index sourcetype=sourcetype
| stats count by Server Time Variable3 Variable4
| fields - count
| lookup mylookup Server Time OUTPUT Variable1 Variable2
| outputlookup mylookup

The problem I'm running into is that the search gets caught on that lookup command for 2+ hours, and I'm not sure why it's taking so long to match that data. Does anyone have any insight into why that is occurring, or how I can restructure my search to accomplish this more efficiently? Or would it be better to try updating the KV store via the REST API from the same script that is generating Variable3 and Variable4?
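One possible restructuring (a sketch, not a tested fix) is to skip the per-row lookup command entirely and merge the event data with the existing collection using inputlookup append=true plus a combining stats. All field and lookup names below are taken from the post above:

index=index sourcetype=sourcetype
| stats count by Server Time Variable3 Variable4
| fields - count
| inputlookup append=true mylookup
| stats values(Variable1) as Variable1 values(Variable2) as Variable2 values(Variable3) as Variable3 values(Variable4) as Variable4 by Server Time
| outputlookup mylookup

The idea is that a single streaming merge keyed on Server and Time often scales better than invoking lookup against a 1.5-2 million row KV store for every result row.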
@phanTom, thank you. It is a bit difficult to keep track of all the IDs, but I learned there is a REST query to get the name of the playbook from its ID.
Sure. That's what stats first/last/earliest/latest/index_earliest/index_latest are for. But: 1) Aren't you trying to make Splunk do something it's not meant to be (like a database table)? 2) Why not use a lookup instead of ingesting events?
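For example, a minimal sketch of keeping only the most recent values per key with stats latest, using field names from the CSV below (the index and sourcetype are placeholders):

index=yourindex sourcetype=yoursourcetype
| stats latest(App_name) as App_name latest(Status) as Status latest(Risk) as Risk by _key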
Below is my CSV. When we first identify the Flow in our app, we update the CSV file with _key, App_name, Date_find, Risk, and Status. When an update happens, I upload or ingest the CSV file into Splunk, almost in real time. We keep this CSV as a lookup outside Splunk, so nothing gets deleted: when I ingest or upload it, all the previous entries get ingested into Splunk again. The only difference is the timestamp at ingestion, so all the entries (such as _key 1, 2, and so on) get the same timestamp. I want to know if it is possible to return only the latest result, so that I have all the data without any duplicates; otherwise I need to find a different solution. The same thing happens when a flow gets fixed: Remediate_date, Risk_After_remediate, and Status get updated and the file gets ingested into Splunk. Thank you in advance.

_key | App_name | Date_find | Status | Risk | Remediate_date | Risk_After_remediate | Status
1 | App1 | 12/04/2022 | Open | Critical | 12/10/2022 | Sustainable | Closed
2 | App2 | 01/26/2023 | Open | Moderate | 02/12/2023 | Sustainable | Close
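An alternative sketch for the same goal, assuming every re-ingest carries the full file and _key uniquely identifies a flow (index and sourcetype are placeholders). Because Splunk returns events newest-first by default, dedup keeps the most recently ingested copy of each key:

index=yourindex sourcetype=yoursourcetype
| dedup _key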
Hey @LearningGuy, this is currently a limitation of Dashboard Studio. Inputs will always stay at the top of the dashboard.
The link above (https://answers.splunk.com/answers/52850/plotting-text-values-on-y-axis.html) does not work for me. Has this question ever been answered? I am also looking for a way to show words (enumerations) on the Y-axis ticks, such as state descriptors "-1=unknown, 0=off, 1=reduced_mode, 2=on", etc.
Hi @PReynoldsBitsIO, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
How to create a total average/median/max of a field in a separate table? Thank you in advance.

index=testindex
| table company, ip, Vulnerability, Score

company | ip | Vulnerability | Score
CompanyA | ip1 | Vuln1 | 2
CompanyA | ip1 | Vuln2 | 0
CompanyA | ip2 | Vuln3 | 4
CompanyA | ip2 | Vuln4 | 2
CompanyA | ip3 | Vuln5 | 3
CompanyA | ip3 | Vuln6 | 5

Group by ip => this worked just fine:

| stats values(company) as company, avg(Score) as AvgScore by ip

company | ip | AvgScore
CompanyA | ip1 | 1
CompanyA | ip2 | 3
CompanyA | ip3 | 4

Group by company => how do I group by company after grouping by ip (using stats) and put it in a separate table?

| stats avg(AvgScore) as Average, median(AvgScore) as Median, max(AvgScore) as Max by company

Company | Average | Median | Max
CompanyA | 2.7 | 3 | 4
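A minimal sketch of chaining the two aggregations in a single search, using the field names above (whether the rollup lands in a truly separate table is a dashboard-layout question; within one search the second stats simply replaces the per-ip rows):

index=testindex
| stats values(company) as company avg(Score) as AvgScore by ip
| stats avg(AvgScore) as Average median(AvgScore) as Median max(AvgScore) as Max by company

To show both tables in a dashboard, one common pattern is a base search ending at the first stats, with a second panel that applies the final stats as a post-process.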
@meshorer whenever you update a playbook, it is saved with a different ID to enable version control. Is the changing ID causing you issues in automation (or elsewhere)?
Hi, I see that playbook IDs keep changing all the time. Can anyone explain the reason for this? Thank you, Daniel
Hello, I have installed the Add-on for Microsoft Azure. How can I get data in from Azure Service Bus?
Did you get that fixed?
Hi, I have onboarded my Splunk to LDAP and subsequently mapped the AD groups to their respective roles in Splunk. However, I have noticed that the users are not populated or shown under "Users" in the web UI. I have asked a user to whom I mapped the roles to log in (LDAP authentication), and they are able to log in and search. There is no existing local account for the user. Running Splunk Enterprise v9.0.6. Appreciate if anyone can help with this. Thanks.
This query will tell you when each user last logged in. It's up to you to decide which of them is "inactive".

| rest /services/authentication/users splunk_server=local
| table title last_successful_login
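As an illustration, here is a sketch that flags anyone who hasn't logged in for 90 days (the threshold is arbitrary, and this assumes last_successful_login is an epoch timestamp):

| rest /services/authentication/users splunk_server=local
| eval days_since_login = round((now() - last_successful_login) / 86400)
| where days_since_login > 90
| table title days_since_login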
I wound up coming up with a solution. Any spaces at the start of a field value are truncated when Splunk builds the chart. I made a sort_order field that adds spaces to the start of the field value; the more spaces, the earlier in the chart order the field is placed. Here's the code now (leading spaces decrease from five down to zero):

<Base Search>
| eval sort_order=case(
    income=="$24,000 and under",   "     $24,000 and under",
    income=="$25,000 - $39,999",   "    $25,000 - $39,999",
    income=="$40,000 - $79,999",   "   $40,000 - $79,999",
    income=="$80,000 - $119,999",  "  $80,000 - $119,999",
    income=="$120,000 - $199,999", " $120,000 - $199,999",
    income=="$200,000 or more",    "$200,000 or more")
| chart count by sort_order
So far, Splunk only supports HTTP 1.1.  Go to https://ideas.splunk.com to make a case for HTTP 2.0.
Is there any search query from which we can get the inactive users? @richgalloway @_JP 
Those fields are not present in every event.  See https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Usedefaultfields
Sadly, this didn't work.  The rename won't change the column values.  I have found a solution though, thank you.
This didn't work.  The chart doesn't respond to the sort order.  Thanks for the attempt though.