All Posts

Hello, thanks for your help. I am hoping for a way in the search to say something like: if the name from the first query = servername1, then the name from the second query = teamname1. But I have no idea how to achieve that. Thanks, Tom
The API reference mentions how to install an app that is already local to the Splunk instance with apps/local. We can already upload an app manually in the web console via Apps > Manage Apps > Install App from File. However, for detection-as-code and CI/CD purposes, I need to do this programmatically through an API. I have found no documented way to do it, which seems unlikely: if it can be done from the web console, surely there is a programmatic equivalent. How do I install an app from outside the Splunk instance using the REST API? Thanks
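(Not from the thread, just an illustrative sketch: the apps/local REST endpoint is documented to accept a package path when filename=true is set, so one possible CI/CD flow is to copy the package to the instance first and then ask splunkd to install it. The host, credentials, paths, and the update flag below are placeholders to verify against your Splunk version.)

# copy the packaged app to the instance (or any path splunkd can reach)
scp my_detections_app.tar.gz splunk-host:/tmp/my_detections_app.tar.gz

# install it through the REST API
curl -k -u admin:changeme https://splunk-host:8089/services/apps/local \
     -d name=/tmp/my_detections_app.tar.gz \
     -d filename=true \
     -d update=true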
Did you, or anyone, figure this one out?
The two searches have no obvious relationship to each other.  How is Splunk to know how to match a server name to a team name?
Unless you're doing something different, sparklines show numeric values over time so there are no error messages to display.
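(For reference, a sparkline comes from the sparkline() function in stats or chart, which renders a per-row trend of a numeric value over time; the index, sourcetype, and field names below are placeholders.)

index=web sourcetype=access_combined
| stats sparkline(count, 1h) as trend, count by host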
Hi @MayurMangoli , the only way to check whether a log was forwarded to the indexers is, as @richgalloway said, to run a search on the search head. You don't have information about which HF the data passed through, but you can see whether the original host sent data. If you think it would be useful to know the hostname of the HF, you could upvote my request in Splunk Ideas, which is "Under Consideration" by Splunk: ideas.splunk.com/EID-l-1731 Ciao. Giuseppe
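(As a quick illustration of that search-head check, with a placeholder host name and the time range set in the picker:)

| tstats count where index=* host=my_original_host by index, sourcetype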
Hello, sorry, still trying to get the hang of search queries. I am tasked with creating a table that displays a server name from one search alongside a team name from another search that corresponds with the server name. For example, 1st search: index="netscaler" | table servername results in a table like: servername1 servername2. 2nd search: index="main" | table teamname results in a table like: teamname1 teamname2. I need to make one table that displays the corresponding teamname next to the servername, i.e. if servername = servername2, display teamname2 in the same table row. Does that make sense? Let me know if any details are needed. Not sure how to do this one. Thanks for any help, Tom
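(Only a sketch of one common pattern, and it assumes both indexes share a linking field, here called host, which, as other replies note, may not actually exist in these events:)

index="netscaler" OR index="main"
| stats values(servername) as servername values(teamname) as teamname by host
| table servername teamname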
Hi @splunklearner , I usually configure all HFs as active syslog receivers (no passive HFs), so it doesn't matter if one HF goes down. But remember that a syslog source keeps sending logs to the same HF as long as that HF is receiving, so a very large syslog source will send to only one HF at a time, because the LB cannot balance traffic within a single stream; the LB still provides the failover.

It's always better to have an LB in front of the HF syslog receivers to distribute load and manage failover independently of the protocol and the source. DNS can be used as a load balancer only if you don't have a real LB, because when a receiver goes down DNS notices it later than an LB like F5, and in that case you lose some logs. If you have an LB in front of your syslog receivers you don't lose any logs.

One more hint from best practices: don't use Splunk as the syslog receiver; use rsyslog (better) or syslog-ng to write the logs to files that Splunk can read. That way you don't need an HF, you can use a UF instead, the server can keep receiving logs even when Splunk is down, and the load on the system is lower than with Splunk receiving syslog directly. Ciao. Giuseppe
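(A minimal sketch of that file-based approach; the port, paths, index, and sourcetype are placeholders to adapt:)

# rsyslog: write each sender's messages to its own file (/etc/rsyslog.d/10-remote.conf)
module(load="imudp")
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
ruleset(name="remote") { action(type="omfile" dynaFile="PerHost") }
input(type="imudp" port="514" ruleset="remote")

# Universal Forwarder: monitor the files and take the host from the path (inputs.conf)
[monitor:///var/log/remote/*/syslog.log]
index = syslog
sourcetype = syslog
host_segment = 4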
Hi @Kenny_splunk , you have to select (flag) all the knowledge objects to reassign and then reassign them in one go. If needed, you can flag all KOs at once. Ciao. Giuseppe
I have experience with multisite clusters, so I ran a single VIP that the LTM balanced across the sites, and each site's LTM then balanced to the cluster of local servers. This meant half my syslog messages crossed data centers, which some would argue is not ideal, but policy required both site and local redundancy in the design. Since you have a UF installed, you should be good from there. This assumes all-UDP traffic, which is easy to just split 50/50 on the LTM. DO NOT DUPLICATE messages. Let the Splunk replication factor do that for you. The TCP part will be a bit different, since you need to consider things like session length and messages per session. A load balancer will only balance different sessions, so maybe something like choosing the least-used link? I don't have experience with TCP syslog, so that's the best I can do for advice.
Ah, sorry, I thought you meant that could have prevented this. I tried changing it to real-time, but it keeps working through all the scheduled searches... At least it seems we are already arriving at October 12th, so I guess it is almost finished and things will be back to normal tomorrow. It just seems like a very odd thing; I'll email my account managers to ask what Splunk themselves know about it.
Yes, but can I bulk reassign instead of having to do it one at a time?
Your searches are different between DS and manual search.

```WRONG TIME STAMP - MINUTE```
index="netscaler" host=*
| rex field="servicegroupname" "\?(?<Name>[^\?]+)"
| rex field="servicegroupname" "(?<ServiceGroup>[^\?]+)"
| rename "state" AS LastStatus
| eval Component = host."|".servicegroupname
| search Name=*
| eval c_time=strftime(Time,"%m/%d/%y %H:%M:%S")
| streamstats window=1 current=f global=f values(LastStatus) as Status by Component
| where LastStatus!=Status
| eval Time = c_time
| table _time, host, ServiceGroup, Name, Status, LastStatus

```CORRECT TIME STAMP```
index="netscaler" host=*
| rex field="servicegroupname" "\?(?<Name>[^\?]+)"
| rex field="servicegroupname" "(?<ServiceGroup>[^\?]+)"
| rename "state" AS LastStatus
| eval Component = host."|".servicegroupname
| search Name=*
| streamstats window=1 current=f global=f values(LastStatus) as Status by Component
| where LastStatus!=Status
| table _time, host, ServiceGroup, Name, Status, LastStatus
Hi @ITWhisperer

Raw event:
2024-10-29 20:42:43.702 [INFO ] [pool-2-thread-1] ArchivalProcessor - Total records processed - 38040
host = lgposput50341.gso.aexp.com
source = /amex/app/abs-upstreamer/logs/abs-upstreamer.log
sourcetype = 600000304_gg_abs_ipc2

My query:
index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount

index="abc" sourcetype=600000304_gg_abs_ipc2 source!="/var/log/messages" "ArchivalProcessor - Total records processed"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount
Hi @Kenny_splunk , it's possible to change the ownership of orphaned (or not) knowledge objects in [Settings > All Configurations > Reassign Knowledge Objects] and then adjusting the filters. Ciao. Giuseppe
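(If a scripted bulk change is an option, the owner of each knowledge object can also be set through its acl REST endpoint; the app, object name, and credentials below are placeholders, and you would loop over the list of objects:)

curl -k -u admin:changeme \
     https://splunk-host:8089/servicesNS/nobody/search/saved/searches/My%20Alert/acl \
     -d owner=new_owner \
     -d sharing=app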
Trying to find out how to show the error message (hourly) when we hover over a Splunk sparkline graph in a Splunk dashboard. Do we have such an option for sparklines?
Hey guys, I sometimes have the task of reassigning ownership to certain teams, and at times it can be multiple dashboards/alerts at once. I have the option to select multiple dashboards/alerts, but when I try to reassign all at once, it doesn't work. I remember someone mentioning that it can be done, so I wanted to check with my favorite community. Thanks again.
As mentioned, try changing the search's schedule from continuous to real-time scheduling.
Try using a token set to the ASCII hex value. When written, the token gets replaced by the single character.
Have you looked at the internal Splunk logs on that node for application activity?
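(For example, something along these lines, with the host name as a placeholder:)

index=_internal host=that_node sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by component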