All Posts


Thank you for the feedback, but again this returns uptimes regardless of length (0, 1, or more). If I use where Uptime=0 it shows me uptime lengths equal to 0, but that does not necessarily mean there are no lengths of 1, 2, or any other value within the span. I need my result to return only those component_hostnames that had no length other than 0 during the span (no 1, 2, or anything else). That is how I would know whether a component is UP or DOWN during my span.
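One hedged way to express that in SPL (assuming Uptime holds the per-bucket length values and component_hostname identifies the component): if the maximum Uptime per host over the span is 0, then 0 was the only value seen.

```spl
... | stats max(Uptime) AS max_len BY component_hostname
| where max_len=0
```

This is a sketch, not the thread's accepted answer; adjust the field names to match your actual events.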
Thanks for the reply. I'll check this out and report back!
Does it have to be via the REST API?  If not, you can use the ACS API to install and manage apps.  See https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Config/ACSreqs
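As an illustration only, an ACS app install for a Victoria Experience stack looks roughly like the following; the stack name, tokens, app ID, and exact headers are placeholders/assumptions, so verify them against the ACS documentation for your stack type:

```shell
# Illustrative sketch: install a Splunkbase app on a Victoria stack via ACS.
# <stack>, both tokens, and <app-id> are placeholders.
curl -X POST "https://admin.splunk.com/<stack>/adminconfig/v2/apps/victoria?splunkbase=true" \
  --header "X-Splunkbase-Authorization: <splunkbase-token>" \
  --header "Authorization: Bearer <acs-token>" \
  --header "ACS-Legal-Ack: Y" \
  --data "splunkbaseID=<app-id>"
```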
Hi, thank you for your reply.  I ran the following query: index=_internal source="*.log" "jsm-splunk-plugin", and here is the result.   Correct me if I ran the wrong query.    Thanks
Yes, you said that in the OP, but what is the logic behind that matching?  The query needs an algorithm it can use to pair servers with teams.  Otherwise, you're looking at creating a lookup table that does the matching.
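If the pairing is fixed, a lookup table is the usual approach. A minimal sketch, assuming a CSV named server_team.csv (columns: servername, teamname) has been uploaded as a lookup file; the names are placeholders:

```spl
index="netscaler"
| table servername
| lookup server_team.csv servername OUTPUT teamname
| table servername teamname
```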
It might be helpful if you shared some sample (anonymised) events from your searches, preferably in raw format in codeblocks (using the </> button above)
What is it that you are trying to chart? The values() aggregate function will give you a multivalue field of strings with the unique values from your events for each time bucket. You cannot chart strings on the y-axis; they need to be numbers.
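For example, if the field holds status strings, one hedged approach is to map them to numbers before charting (the field name and values here are assumptions, not from your search):

```spl
... | eval status_num=case(Status=="UP", 1, Status=="DOWN", 0)
| timechart span=5m max(status_num) AS status BY host
```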
Hello, Thanks for your help,  I am hoping for a way in the search to say something like if name from first query = servername1 then name from second query = teamname1.  But, have no idea how to achieve that. Thanks, Tom
The API reference mentions how to install an app that is already local to the splunk instance with apps/local. We can already upload an app manually in the Web console by going Apps->Manage Apps->Install App from File. However, for detection-as-code purposes, I need to be able to do that in a programmatic way, using an API, for CI/CD purposes. I have seen no documented way to do that, which can't be true. Surely if we can do that from the web console, there is a way to do that programmatically using an API. How do I install an app outside the Splunk instance from the REST API? Thanks
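For a package that already exists on the instance's filesystem, my understanding of the classic REST API reference is that the apps/local endpoint accepts a package path; this is a hedged sketch with placeholder credentials and path, so verify the parameters against your Splunk version:

```shell
# Illustrative sketch: install an app package already present on the
# instance at /tmp/myapp.tgz, via the management port (8089).
curl -k -u admin:changeme https://localhost:8089/services/apps/local \
  -d name=/tmp/myapp.tgz \
  -d filename=true \
  -d update=true
```

Note that on Splunk Cloud the management port generally isn't exposed, which is why ACS is the usual answer there; this sketch applies to Splunk Enterprise.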
Did you, or anyone, figure this one out?
The two searches have no obvious relationship to each other.  How is Splunk to know how to match a server name to a team name?
Unless you're doing something different, sparklines show numeric values over time so there are no error messages to display.
Hi @MayurMangoli , the only way to check if a log was forwarded to the Indexers is, as @richgalloway said, to run a search on the Search Head. You don't have the information of which HF the data passed through, but you can see if the original host sent data. If you think it would be useful to know the hostname of the HF, you could upvote my request in Splunk Ideas, which is "Under Consideration" from Splunk: ideas.splunk.com/EID-l-1731 Ciao. Giuseppe
Hello, Sorry, still trying to get the hang of search queries. I am tasked with creating a table that displays a server name from one search alongside the team name from another search that corresponds with that server name. For example:

1st search: index="netscaler" | table servername
Results in a table like:
servername1
servername2

2nd search: index="main" | table teamname
Results in a table like:
teamname1
teamname2

I need to make one table that displays the corresponding teamname next to the servername, i.e. if servername = servername2, display teamname2 in the same table row. Does that make sense? Let me know if any details are needed. Not sure how to do this one. Thanks for any help, Tom
Hi @splunklearner , I usually configure all HFs as active and receiving syslog (no passive HFs); that way it isn't relevant if one HF is down. But remember that a syslog source keeps sending logs to the same HF for as long as that HF is receiving, so if you have a very big syslog data source it will send syslog to only one HF at a time, because the LB cannot balance that traffic; the LB still assures the failover, though. It's always better to have an LB in front of HF syslog receivers to distribute load and manage failover independently of the protocol and the source. DNS can be used as an LB only if you don't have a real one, because when a receiver is down DNS notices it later than an LB like F5 would, and in that case you lose some logs. If you have an LB in front of your syslog receivers you don't lose any logs. One more hint from best practices: don't use Splunk as the syslog receiver but rsyslog (better) or syslog-ng, which write logs to files that Splunk can read; that way you don't need an HF but can use a UF, your server can receive logs even when Splunk is down, and the load on the system is lower than with Splunk receiving syslog directly. Ciao. Giuseppe
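A minimal rsyslog sketch of that pattern (the port, paths, and template name are placeholders): write one file per sending host, which a Universal Forwarder then monitors:

```conf
# /etc/rsyslog.d/10-splunk.conf -- illustrative only
module(load="imudp")
input(type="imudp" port="514")

# One file per source host; point a UF's monitor stanza at /var/log/remote/
$template PerHostFile,"/var/log/remote/%HOSTNAME%/syslog.log"
*.* ?PerHostFile
```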
Hi @Kenny_splunk , you have to flag all the knowledge objects to reassign and then reassign them all at once. If you want, you can flag all the KOs in one go. Ciao. Giuseppe
I have experience with multisite clusters, so I ran a single VIP that load-balanced (LTM) to the sites; each site then load-balanced to its cluster of local servers.  This meant half my syslog messages crossed data centers, which some would argue is not ideal, but policy required site and local redundancy in the design.  Since you have a UF installed you should be good from there.  This assumes all-UDP traffic, which is easy to LTM with a 50/50 split. DO NOT DUPLICATE messages.  Let the Splunk replication factor do that for you. The TCP part will be a bit different since you need to consider things like session length and messages per session.  A load balancer will only balance distinct sessions, so maybe choose something like least-used link? I don't have experience with TCP syslog, so that's the best I can do for advice.
Ah sorry, I thought you meant that could have prevented this. I tried changing it to real-time, but it keeps going through all the scheduled searches...  At least it seems we are already arriving at October 12th, so I guess it is almost finished and I can work normally again tomorrow. It just seems like a very weird thing; I'll email my account managers to ask what Splunk themselves know about this.
Yes, but can I bulk reassign instead of having to do it one at a time?
Your searches are different between DS and manual search. ```WRONG TIME STAMP - MINUTE``` index="netscaler" host=* | rex field="servicegroupname" "\?(?<Name>[^\?]+)" | rex field="servicegroupname... See more...
Your searches are different between DS and manual search.

```WRONG TIME STAMP - MINUTE```
index="netscaler" host=*
| rex field="servicegroupname" "\?(?<Name>[^\?]+)"
| rex field="servicegroupname" "(?<ServiceGroup>[^\?]+)"
| rename "state" AS LastStatus
| eval Component = host."|".servicegroupname
| search Name=*
| eval c_time=strftime(Time,"%m/%d/%y %H:%M:%S")
| streamstats window=1 current=f global=f values(LastStatus) as Status by Component
| where LastStatus!=Status
| eval Time = c_time
| table _time, host, ServiceGroup, Name, Status, LastStatus

```CORRECT TIME STAMP```
index="netscaler" host=*
| rex field="servicegroupname" "\?(?<Name>[^\?]+)"
| rex field="servicegroupname" "(?<ServiceGroup>[^\?]+)"
| rename "state" AS LastStatus
| eval Component = host."|".servicegroupname
| search Name=*
| streamstats window=1 current=f global=f values(LastStatus) as Status by Component
| where LastStatus!=Status
| table _time, host, ServiceGroup, Name, Status, LastStatus