All Topics


Hello, we have a 3-member SHC. Would it be possible to add 1 SH dedicated to a special team and give admin rights only on this last one? Thanks.
In props.conf I have:

[mysourcetype]
EVAL-field1 = trim(field1)

I need the same trim applied to every field of that sourcetype, not just field1. Is there a way?
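One search-time workaround sketch, since props.conf EVAL- statements each target a single named field (the index and sourcetype here are illustrative):

index=main sourcetype=mysourcetype
| foreach * [ eval <<FIELD>> = trim('<<FIELD>>') ]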
Hello everyone. Please, I am a student and this is the first time I am using Splunk. I installed Splunk Enterprise on my Windows 10 machine. I need to monitor my Active Directory (server), but I cannot find the app and add-on I am looking for. I tried a few add-ons and I can receive data from the server and search it in the web interface, but I need a dashboard like the Splunk App for Windows Infrastructure (end of life) or the MS Windows AD app (I have a problem with it). Please, who can help me? I need to finish my project as soon as possible.
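A minimal inputs.conf sketch for pulling Active Directory events from a domain controller, assuming the Splunk Add-on for Windows is installed (standard Windows Event Log and AD monitoring stanzas; adjust to your environment):

[WinEventLog://Security]
disabled = 0

[WinEventLog://Application]
disabled = 0

[admon://default]
monitorSubtree = 1
disabled = 0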
Hey guys. So I have a search which creates a bar chart:

| rex field=_raw "(.Net Version is)\s+(?<DotNetVersion>.+)"
| stats latest(DotNetVersion) as DotNetVersion by host
| fillnull value="-"
| eval status=case(match(DotNetVersion,"Not!"),"noncompliant",1==1,"Compliant")
| chart count by status

I have tried most options in the XML but I can't get it to be green/red. Any ideas? I thought "charting.fieldColors" would do the trick, but I think maybe my series is called "count" instead of Compliant/noncompliant.
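If the series really is named "count", one hedged way around it is to make the statuses themselves the series by replacing the final chart with per-status counts, then color them in Simple XML (color values illustrative):

| stats count(eval(status=="Compliant")) as Compliant count(eval(status=="noncompliant")) as noncompliant

and in the dashboard's Simple XML:

<option name="charting.fieldColors">{"Compliant": 0x65A637, "noncompliant": 0xD93F3C}</option>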
Hi, is it possible to roll specific buckets to frozen? I have some buckets which the customer wants to be deleted (don't ask why), and I would kindly ask if this is possible without stopping Splunk. br Tom
I packed a Splunk app with the tar command on a Linux host, running as the root user. As a result, the owner and group owner are both 'root'. After I installed it on Splunk Enterprise, I found that the decompressed directory and its files are all owned by 'root'. However, other installed app directories and files belong to 'splunk'. So, should I su to splunk first and then pack the app file?
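Two hedged options (GNU tar; the app name and paths are illustrative): record the intended owner in the archive at pack time, if the splunk user exists on the packing host, or fix ownership after extraction.

# pack without leaking the packer's identity
tar czf myapp.tgz --owner=splunk --group=splunk myapp/

# or repair ownership after install
chown -R splunk:splunk $SPLUNK_HOME/etc/apps/myapp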
Honored Splunkodes, I am trying to keep track of the manpower in each of my legions, so that if any legion loses too many troops at once, I know which one to reinforce. However, I have many legions, and thus I track all of their manpower without knowing which ones will be important each day. I can't leave my myrmidons without reinforcements! I'd like to generate statistical information about them at the time of graph generation. Currently I'm doing this; it's dirty, but it works. I get my legion manpower by querying that index, dropping any that don't fall in the top 50:

index=legions LegionName=*
| timechart span=1d limit=50 count by LegionName
| fields - OTHER
| untable _time LegionName ManPower
| outputlookup append=f mediterranean_legions.csv

Then I load up my lookup:

| inputlookup mediterranean_legions.csv
| convert timeformat="%Y-%m-%dT%H:%M:%S" mktime(_time) as _time
| bucket _time span=1d
| timechart avg(ManPower) by LegionName
| fields - OTHER
| untable _time LegionName ManPower
| streamstats global=f window=10 avg(ManPower) as avg_value by LegionName
| eval lowerBound=(-avg_value*1.25)
| eval upperBound=(avg_value*1.25)
| eval isOutlier=if('ManPower' < lowerBound OR 'ManPower' > upperBound, "XXX".ManPower, ManPower)
| search isOutlier="XXX*"
| table _time, LegionName, ManPower, *

This gives me a quick idea of which legions have lost (or gained) a lot of manpower each day. Ideally, though, I'd like to generate a standard deviation and flag outliers based on z-score rather than just guessing with the lower and upper bound values. If this worked, I'd get what I want. Is there a way to accomplish this?

| streamstats global=f window=10 avg(ManPower) as mp_avg stdev(ManPower) as mp_stdev max(ManPower) as mp_max min(ManPower) as mp_min by LegionName
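A hedged continuation of that streamstats (note that one streamstats can take several aggregations before a single by clause), computing a z-score and flagging anything beyond 3 standard deviations; the threshold is illustrative:

| streamstats global=f window=10 avg(ManPower) as mp_avg stdev(ManPower) as mp_stdev by LegionName
| eval zscore=if(mp_stdev > 0, (ManPower - mp_avg) / mp_stdev, 0)
| eval isOutlier=if(abs(zscore) > 3, "XXX", null())
| where isOutlier="XXX"
| table _time, LegionName, ManPower, zscore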
How can I display _time in my results using the stats command? I get this field when I use "table _time". I want to get the same time field using the stats and/or eval commands. (The original post included screenshots of the table output and of the raw events.)
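A hedged sketch of the usual pattern: carry _time through stats as an aggregate, then render it with fieldformat (index, sourcetype, and split field are illustrative):

index=main sourcetype=mysourcetype
| stats latest(_time) as _time count by host
| fieldformat _time=strftime(_time, "%Y-%m-%d %H:%M:%S")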
Hi, for our cloud-hosted API monitoring, we've implemented error- and performance-based (response time) HRs for each of our APIs and mobile app Network Requests. The reason for the granular level of monitoring is so we can tie the HRs to health status indicators on our dashboards and see at a glance exactly which APIs are experiencing issues in a single dashboard view. For our performance-based health rules, we have two alerting criteria: response time versus an AI-established baseline (set to alert over a set number of standard deviations), as well as a static threshold (a "must not exceed" response time threshold), which is used to monitor slow performance degradation over a long period of time and to catch response time spikes that the baseline features would otherwise consider normal. Is this a recommended approach, or does the AppD community/AppD team think that only baseline-based thresholds are recommended for BT/Network Request perf monitoring? My concern is that using static thresholds requires more maintenance over time and will be an operational burden. ^ Edited by @Ryan.Paredez for a more searchable title
Hello, we currently have 2 local clustered indexers on 2 local sites and 1 remote in the same country (mainly in case of disaster), and 3 SHC: 2 load-balanced for the 2 local sites and 1 remote (not accessible by users). RF=3, SF=3, and the manager node is on the 1st site. Would it be worthwhile to use multisite clustering and configure it with origin/total settings for the same data safety? Thanks.
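For reference, a sketch of what the origin/total settings look like in the manager node's server.conf for a three-site layout (site names and counts are illustrative, not a recommendation for your safety requirements):

[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2,site3
site_replication_factor = origin:1,total:3
site_search_factor = origin:1,total:3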
Hey community, we are using the Universal Forwarder as a sidecar in K8S following the GitHub introduction, but the document is not clear enough and cannot guide us through integrating with the server.

env:
  - name: SPLUNK_START_ARGS
    value: --accept-license
  - name: SPLUNK_USER
    value: root
  - name: SPLUNK_GROUP
    value: root
  - name: SPLUNK_PASSWORD
    value: helloworld
  - name: SPLUNK_CMD
    value: add monitor /var/log/
  - name: SPLUNK_STANDALONE_URL
    value: splunk.company.internal

Some questions about the configuration:
1. Splunk user and password: where can we get this user and password? Shall we allocate an account from the Splunk Enterprise server?
2. SPLUNK_STANDALONE_URL: is this the Splunk Enterprise server URL? Is it possible to get this URL from the Splunk server?
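As background, a hedged sketch of what the wiring boils down to: the UF just needs to reach a receiving port on the Splunk Enterprise server (hostname and port illustrative; 9997 is the conventional receiving port):

# on the Splunk Enterprise server: open a receiving port
$SPLUNK_HOME/bin/splunk enable listen 9997

# outputs.conf the forwarder effectively needs
[tcpout:primary]
server = splunk.company.internal:9997

If this is based on the splunk/universalforwarder Docker image, SPLUNK_PASSWORD is, as far as I understand, the admin password created inside the forwarder container itself, not a credential issued by the Splunk Enterprise server.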
Hi, I'm setting up the Splunk Add-on for Microsoft Office 365, and as stated in the Splunk documentation I have to allow communication from the add-on to the Azure servers. How do I find which Azure servers these would be? "Make sure that port 443 is open to allow the Splunk Add-on for Microsoft Office 365 to communicate with the Microsoft Azure servers." Best regards, J.
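For illustration only (verify against current Microsoft and Splunk documentation, since endpoints change): the add-on pulls from the Office 365 Management Activity API and authenticates against Azure AD, so an egress allowlist typically looks something like:

login.microsoftonline.com:443
manage.office.com:443
graph.microsoft.com:443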
Using a HF to forward all events to an indexer and an external syslog server. When using syslog over TCP, all processing basically stopped as the queues filled up (and I've adjusted queue sizes already). I haven't found much on the Internet about this, but I did try UDP with the thought that it should be "send and forget" as far as the HF is concerned, so it shouldn't slow data ingestion down, but it still does. I'm not using props or transforms for the syslog output because I want it to send all events. After bringing the HF up, within a few minutes the queues fill up and everything grinds to a halt. If you look at it from the local MC, you can see there is no resource load on the server, and you see a little ingestion occur every few minutes or so. The little data that gets to the indexer becomes more timestamp-skewed. I'm beating my head on that proverbial rock, as this was working fine with TCP for a while and now it isn't working even using UDP. Here is my syslog outputs.conf on the HF:

[syslog]
defaultGroup = forwarders_syslog
maxQueueSize = 10MB

[syslog:forwarders_syslog]
server = xx.xx.xx.xx:10514
type = udp
disabled = 0
priority = <34>
timestampformat = %b %e %H:%M:%S
useACK = false

I should also mention that there is no issue on the syslog server or the indexer; they are not taxed by any metric. The syslog server forwards to another syslog server via the Internet and does use TCP for that, but since the incoming data is written to a file, I don't see how that could impact the syslog server receiving data from the HF. Any advice will be appreciated. I've opened a case with Splunk, but they have been less than responsive.
Hey, need help. I have a client that has been running Splunk as root for a while but now wants to run Splunk as the splunk user, and there's no splunk user yet. It's a Linux server. Can anyone help with steps to proceed without running into permission errors? Should I:

useradd splunk
chown -R splunk:splunk /opt/splunk

and restart Splunk? Will appreciate suggestions.
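A hedged sketch of the usual sequence, assuming a default /opt/splunk install (try it on a non-production box first):

# stop Splunk while it is still running as root
/opt/splunk/bin/splunk stop

# create the user and hand over ownership
useradd -m splunk
chown -R splunk:splunk /opt/splunk

# register boot-start under the new user, then start as that user
/opt/splunk/bin/splunk enable boot-start -user splunk
sudo -u splunk /opt/splunk/bin/splunk start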
Hi. I am having trouble figuring out how to execute this, although it's probably simple:

search 1 | field 1 | join [ search 2 | field 2 ] | table field 1, field 2

Each instance of field 1 will return multiple values for field 2. I want to table both fields such that every value for field 2 is printed next to its corresponding value for field 1. The left column will have some duplicate values; the right column will have only unique values. I want a table that looks like this:

FIELD 1   FIELD 2
value A   value 1
value A   value 2
value A   value 3
value B   value 1
value B   value 2
value C   value 1
value C   value 2
value C   value 3

This is my actual search:

index=soe_app_retail sourcetype="vg:hvlm" source="*prd/vpa*" "*NumberOfRules*"
| rex field=_raw "poid=(?<field_1>\d+)"
| join type=inner uid
    [ search index=soe_app_retail sourcetype="vg:hvlm" source="*prd/vpa*" "*upper*"
      | rename message as field_2 ]
| table field_1, field_2

Right now I am getting only 1 row for each field_1 value, even though I know there are multiple values of field_2 for each field_1. I think it involves mvexpand, but I can't figure it out.
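A hedged tweak, assuming uid really is the shared key: join keeps only the first subsearch match per key by default, so raising max may be all that's missing (max=0 means unlimited matches):

index=soe_app_retail sourcetype="vg:hvlm" source="*prd/vpa*" "*NumberOfRules*"
| rex field=_raw "poid=(?<field_1>\d+)"
| join type=inner max=0 uid
    [ search index=soe_app_retail sourcetype="vg:hvlm" source="*prd/vpa*" "*upper*"
      | rename message as field_2 ]
| table field_1, field_2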
So I'm trying to chart blocked traffic (IPs) over 7 days, the purpose being to help locate beaconing traffic. This has worked at a previous job, but I'm taking it a step further by only wanting to see days with values. Example: I would want to see results that show only the days with values. The query works; I just see a lot of days with 0 data. Here's my query:

index="pan_logs" sourcetype="pan:traffic" dest_zone="Public" src="10.11.16*" action=blocked
| chart count(dest) by dest date_wday
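One hedged way to drop the zero cells: flatten the chart, filter, and rebuild it (a sketch of the untable/xyseries pattern):

index="pan_logs" sourcetype="pan:traffic" dest_zone="Public" src="10.11.16*" action=blocked
| chart count(dest) as count by dest date_wday
| untable dest date_wday count
| where count > 0
| xyseries dest date_wday count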
Hello, I'm experiencing the following issue on one of my search heads (3 in total):

Knowledge bundle size=2608MB exceeds max limit=2000MB. Distributed searches are running against an outdated knowledge bundle. Please remove/disable files from knowledge bundle or increase maxBundleSize in distsearch.conf.

Why is this SH behaving like this when the others have the same config?
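A hedged sketch of where to look: the bundles usually live under var/run on the search head, and the controls the message refers to are in distsearch.conf (the lookup name and size are illustrative):

# inspect what is inflating the bundle
ls -lh $SPLUNK_HOME/var/run/*.bundle

# distsearch.conf on the search head
[replicationSettings]
maxBundleSize = 3000

[replicationBlacklist]
hugelookup = apps/myapp/lookups/huge_lookup.csv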
I would like to create a dashboard that has field inputs so that I can share it with end users who are not familiar with Splunk. I just want to present two fields where they can provide input, like an IP address, click GO, and get back the results of my particular search query. If that's possible, what do I have to look into? Helpful if I can get a link as well. Thank you in advance.
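A minimal Simple XML sketch of that pattern, assuming a text input feeding a token into the search (index and field names are illustrative); the Splunk docs on Simple XML form inputs and tokens cover this end to end:

<form>
  <label>IP Lookup</label>
  <fieldset submitButton="true">
    <input type="text" token="ip">
      <label>IP address</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main src_ip="$ip$" | table _time src_ip dest_ip action</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>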
Hi everyone, I'm new to Splunk and I'm trying to analyze my router's (dd-wrt) syslog. So I installed Splunk on Ubuntu, plus SA-CIM and TA-Tomato (by the way, what do SA- and TA- mean?). I found that most of the dashboards show nothing or very little information. I don't know what else to do to get more than 'no data found'. Mainly I want to analyze incoming attacks, VPN connections, and other things. Is someone using TA-Tomato or anything else?
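A hedged first check: make sure Splunk is actually listening for the router's syslog at all. A minimal inputs.conf sketch (port and sourcetype are illustrative; routers conventionally send to UDP 514, which requires elevated privileges to bind):

[udp://514]
sourcetype = syslog
connection_host = ip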
Hello community, on my desk I have a pretty edgy request that is giving me quite a headache. I need to collect (with | collect) the output of a search into a new sourcetype created dynamically within the search itself. Here is a simple ad hoc example:

| makeresults
| eval letter1="A", letter2="B", letter3="C"
| eval variabile="NewSourcetype"
| eval _raw=_time + ": " + _raw
| collect index=garbage sourcetype=variabile

The problem is that the event is stored under sourcetype=variabile instead of sourcetype=NewSourcetype. Any idea how to manage such a situation? Thanks in advance for your kind support.
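One hedged workaround sketch, since collect's sourcetype option takes a literal string rather than a field value: map does substitute $field$ tokens from each incoming row, so the sourcetype can be built dynamically there (quoting may need tweaking for your data):

| makeresults
| eval variabile="NewSourcetype"
| map maxsearches=1 search="| makeresults | eval _raw=\"hello world\" | collect index=garbage sourcetype=\"$variabile$\""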