One of our servers is forwarding fine; however, the files aren't being written to var/log/syslog/remote. I am new to Splunk, so any assistance would be appreciated.
Hello, I have to index a log file whose rows carry only a time-of-day timestamp, in the form HH:MM:SS field1 field2 ... Whenever a new row is added, I need to merge the current date with the log timestamp so it becomes YY/MM/DD HH:MM:SS. I spent a whole day combining props.conf and transforms.conf configurations without success. Can anyone help me solve this? Thanks.
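Splunk's timestamp processor fills in missing date components on its own: if TIME_FORMAT only matches the time of day, the date is derived from context (the file's modification time or the current date), so no transforms are usually needed. A minimal props.conf sketch, assuming a sourcetype name of my_timeonly_log:

```
# props.conf on the parsing tier (HF or indexers)
[my_timeonly_log]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 8
```

With this in place, an event logged as "14:32:07 field1 field2" should be indexed with today's date plus 14:32:07 as its _time.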
Hello guys, first of all, Happy New Year!

I have installed Splunk Enterprise Insights on Windows machines (Windows 10, Server 2016/2019) and on Linux distros (Ubuntu 20.04, 18.04) to try to get the Windows event logs into my Splunk instance. The network, RAM, and CPU statistics are working, but the Windows event logs are not. On every one of the installs mentioned above, I get this error in the Splunk web interface:

Error in 'stats' command: The aggregation specifier 'first(Adresse' is invalid. The aggregation specifier must be in [func_name]([key]) format

I also tried adding sources directly from the right path in the personalized sources in the Splunk web interface, like "C:\windows\system32\blablabla\Security.evtx", but that's not working either. I'm stuck. I'm trying to get all the logs from a Windows host (in a workgroup) and I'm an admin (it's a fresh install, just for testing!). Can some Splunk folks help me please? Thank you all in advance!
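The truncated specifier 'first(Adresse' suggests a field name containing a space (likely a French-localized Windows event field such as "Adresse IP") is reaching a stats call unquoted, so the parser stops at the space. When building your own searches over such events, one workaround is to rename the spaced field first. A sketch, where the index, sourcetype, and field names are assumptions for illustration:

```
index=main sourcetype="WinEventLog:Security"
| rename "Adresse IP" as adresse_ip
| stats first(adresse_ip) as first_ip by host
```

The rename command accepts double-quoted field names with spaces, after which stats can aggregate the plain name safely.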
Hi Splunkers, I'm working on a dashboard that has two panels showing the status of bots. The first panel shows the status of all the bots, and the second shows the list of unsuccessful bots. My requirement is that a bot should appear in the unsuccessful panel only if its latest run is not successful; if the latest run of a bot is successful, there shouldn't be any entry for it in the unsuccessful bots panel. Is there any option to compare the current status and remove stale unsuccessful entries from the second panel? Attached is a screenshot for reference. Thanks for any help here.

First panel query:
index="abc" (TYPE="Run bot finished" OR TYPE="Run bot Deployed") | $bot$ | $env$ | table _time,BOT_NAME, STATUS

Second panel query:
index="abc" (TYPE="Run bot finished" OR TYPE="Run bot Deployed") STATUS="Unsuccessful" | $bot$ | $env$ | table _time,BOT_NAME,STATUS
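One common pattern for "only the latest run counts" is to collapse to the most recent event per bot before filtering on status, rather than filtering raw events. A sketch for the second panel, reusing the poster's index and tokens (the field names are taken from the queries above):

```
index="abc" (TYPE="Run bot finished" OR TYPE="Run bot Deployed") | $bot$ | $env$
| stats latest(_time) as _time latest(STATUS) as STATUS by BOT_NAME
| where STATUS="Unsuccessful"
| table _time, BOT_NAME, STATUS
```

Because stats keeps only the latest STATUS per BOT_NAME, a bot whose most recent run succeeded drops out of the panel automatically.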
I have a backlog of a huge number of .csv files skipped by the UF that need to be ingested manually to backfill. What is the easiest and best method? If I manually ingest from the search head, will the transforms.conf and props.conf on the HF and indexers take effect?
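Parse-time props and transforms apply on whichever instance first parses the data, so a backfill run on the search head would be parsed there and bypass HF-side configuration. One option is to run a oneshot ingest on the heavy forwarder itself so its props/transforms apply. A sketch with example paths and names:

```
# run on the heavy forwarder (or another instance on the parsing tier)
$SPLUNK_HOME/bin/splunk add oneshot /data/backfill/skipped_file.csv -sourcetype my_csv -index main
```

For a large backlog, the same command can be looped over the files, or a temporary batch input (which deletes files after indexing) can be configured on that instance.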
We have not been able to deploy predictive and anomaly detection capabilities that leverage MLTK, as the current ML algorithms in ITSI cannot account for business hours/seasonality. My question is: is there a query example you can point me to that shows how to exclude specific date ranges, such as a day of the week or an hour, for example excluding Saturday and Sunday? We are trying to include a predictive score in glass tables, but with the current capabilities of ITSI it's not possible to exclude time ranges. What we want to do is leverage MLTK to create a custom prediction health score model with time-range exclusions, then publish that model back into ITSI and show the health score. So my questions are: how do I exclude time ranges from consideration, and when ITSI predicts the health score, does it take time data into consideration? If so, how can I use the same algorithm but with time ranges excluded? If you can point me to an appropriate resource, that would greatly help me out.
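Excluding weekends or off-hours from a training search can be done with an eval on _time before the data reaches an MLTK command. A sketch, where the index and KPI name are assumptions; only the filtering step is shown:

```
index=itsi_summary kpi="ServiceHealthScore"
| eval dow=strftime(_time, "%a"), hr=tonumber(strftime(_time, "%H"))
| where NOT (dow="Sat" OR dow="Sun") AND hr>=8 AND hr<18
```

Whatever fit command follows then only ever sees business-hours samples, so the resulting model is not skewed by weekend behavior.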
The problem we are running into is that the entity import cannot keep up with the transient nature of the AWS cloud: we have auto-scaling AWS EC2 instances and containers that get terminated while new ones get created, but the new entities are not added automatically to services. How can I solve this issue in ITSI? Any ideas? Here is a reference link for what auto scaling is. Basically, we are getting data from CloudWatch and passing it to Splunk, but because thousands of containers are spinning up and down, Splunk doesn't dynamically add such entities to services. Is there any way to overcome this? Or should I be using some other approach to monitor these kinds of microservices, where containers spin up and down and can be on one node one moment and another the next, yet the entities do not get updated each time that happens? How do other customers monitor microservices in such cases? Any ideas?
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-monitoring.html
Hi, we are using different types of tools to monitor our infrastructure, and I am trying to get the data from multiple tools and feed it into Splunk ITSI. I am using Zenoss and AppDynamics to send logs and metrics to Splunk. Unfortunately, there are some challenges I am running into. It's very difficult to prep the Zenoss and AppD logs/metrics to be consumed by Splunk in ITSI. We have not yet been able to define KPIs for server availability (device PING up/down), service/process down, port down, individual utilization, etc., because without getting and normalizing all the Zenoss and AppD data into Splunk, ITSI cannot be successful. In Splunk, we are missing KPIs for PING/availability, processes, ports, interface utilization, and SWAP utilization in %. The only meaningful KPIs we are getting in Splunk are LoadAvg, CPU utilization in %, and memory utilization in %. Is there any way to monitor this other data in Splunk ITSI? Note that I know, for example, SolarWinds and New Relic have ways to monitor availability. Has anyone run into similar issues? Or should I be using different tools for ping/availability, interface utilization, etc.? If that's the case, is anyone using ITSI to link back to other tools?
Question: we are trying to monitor disk space usage in Splunk ITSI, and we are trying to use templates as much as possible in our environment. What I am trying to understand is how we monitor drive space when each individual server has multiple file systems. Do we have to write multi-KPI searches for each and every server/entity if we want to identify issues like a disk becoming full because a particular log file was writing debug messages? Do we have to write multi-KPI searches if we want to identify that CPU/memory was at 100% because a runaway process or service was consuming very high CPU/memory resources? For these types of root-cause correlation, what would be a good way of representing this visually? It looks like the deep dive does not provide this level of visibility, and it seems to me that it would require manual correlation. Is my understanding correct? I guess what I am trying to say is: I want to see, for example, which log file caused the disk space to go high, and be able to see the log entry, somehow from the same view. Similarly for CPU: how do we see which process ended up causing the CPU to spike? We see the metric-based data in the deep dive view, but it would also be nice to see the actual process, or drill into the metric somehow, so that it shows why the spike happened. Is this something that can be done from the deep dive view, or do we need to create individual manual correlations and jump outside of the native ITSI deep dive functionality?
How can we create templates per OS type (Linux, AIX, and Windows) so that they account for KPIs around file systems, given that each server can have a different number of volumes or file systems? For example, one Windows server can have C, D, and E drives and another can have C, E, and T drives, so how can one template handle these differences? Is it possible to visualize multiple KPIs in a dynamic template? We are running Zenoss, and inside Zenoss we literally see, per server, disk space usage across the different data volumes. Splunk is only showing one single KPI for "disk_percentUsed" while this server has 11 file systems in total. Has anyone run into this limitation of not being able to present multiple KPIs in a single dynamic template? Or is ITSI the wrong place for us to do this, and should some other Splunk tool or dashboard visualization be used instead? Or do we need to create multiple dynamic templates to tackle this? We just want to visualize file systems as part of a dynamic template, and leverage dynamic templates as much as possible. Has anyone else run into this limitation of not being able to display file systems from Zenoss in ITSI, or run into similar challenges, and if so, which solution did you come up with to overcome them? So I guess if you look at the KPI, you will see it shows disk_percentUsed for a specific Linux server, for example, but the thing is the Linux server has multiple drives. In this case, how do we represent all of these drives as part of a single disk_percentUsed KPI? Or is that not really possible, and we would need a different KPI per file system? Any ideas?
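One way to surface every file system under a single metric name is to split the base search by a mount-point dimension, so each drive or volume shows up as its own series without a separate KPI per server. A sketch, where the index, metric name, and dimension names are assumptions about how the Zenoss data lands in a metrics index:

```
| mstats latest(storage.used_pct) as used_pct WHERE index=os_metrics BY host, mount_point span=5m
```

If the incoming data carries the mount point as a dimension, this yields one time series per host and file system (C:, D:, /var, and so on) from one search, which is the usual precondition for a single KPI or template to cover servers with different drive layouts.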
Hi, a question: I am running into an issue with Splunk ITSI. We have a lot of different types of entities shown in the service analyzer. The problem we are running into is that entity names are coming in as non-user-friendly values, showing up as instance IDs; we would like to have user-friendly names. The other issue is that a value is shown as the number 9; instead of the number 9, we would like to show it as either good or bad (9 is good, anything other than 9 is bad). In order to change an entity name in ITSI from a non-friendly name to a friendly name, do I need to pull these values directly from AWS tags, or are there other ways of doing this?
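Both transformations can be done in the search layer before the data reaches a panel or entity import: a lookup can map instance IDs to friendly names, and an eval can translate the numeric status. A sketch, where the index, sourcetype, lookup name, and field names are assumptions for illustration:

```
index=aws_metrics sourcetype=aws:cloudwatch
| lookup instance_names instance_id OUTPUT friendly_name
| eval status_label=if(status==9, "good", "bad")
| table friendly_name, status_label
```

The instance_names lookup here is hypothetical; it could be populated from AWS tags (for example via the AWS add-on's tag data) or maintained as a CSV.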
How can we accomplish infrastructure monitoring in Splunk to monitor CPU, memory, and disk?
Currently, my firewall logs (Palo Alto) are sent via syslog to a virtual Linux machine. On that machine, I run a full version of Splunk (heavy forwarder, 8.x) that sends to separate indexers. I was planning to migrate the syslog data to new Linux servers and use a Universal Forwarder instead, but I'm running into what look like some serious performance issues. The UF will send a big chunk of data to start, but then the indexer stops receiving from the UF. I tried the post at https://community.splunk.com/t5/Getting-Data-In/Universal-Forwarder-ParsingQueue-KB-Size/td-p/50410 to increase the size of the parsingQueue, but that didn't help. I'm not quite sure what to look at next. Maybe the stream is too much for the UF to handle? I haven't found anything definitive on that subject.
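One frequent cause of this burst-then-stall pattern is the Universal Forwarder's default throughput cap: a UF ships with maxKBps = 256 in limits.conf, which a busy firewall syslog stream can easily exceed, while a full Splunk instance (heavy forwarder) has no such default cap. A sketch of the change, to be placed on the UF:

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the UF
# default is maxKBps = 256; 0 removes the throughput limit
[thruput]
maxKBps = 0
```

A restart of the UF is needed for the change to take effect; comparing the forwarder's _internal metrics before and after should show whether the cap was the bottleneck.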
Hello, I had a quick question with regard to props.conf and how it would behave. We have a directory which has a large number of different logs, and we use just one sourcetype for all of them (*.* in the path in inputs.conf). I am planning to set up the following props.conf for this sourcetype, as the vast majority of the log files follow this date structure. However, a few of the logs do not. I'm just wondering how those logs would behave. Would they simply revert to the overall system default? Of course, I could set up separate sourcetypes for each file name if need be, but I would rather continue with what I have for now.

SHOULD_LINEMERGE=true
TIME_FORMAT=%Y-%m-%d %H:%M:%S[\.,]%3N
TIME_PREFIX=^
MAX_TIMESTAMP_LOOKAHEAD=24
BREAK_ONLY_BEFORE_DATE=TRUE

Thanks!
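When a line does not match TIME_FORMAT, Splunk falls back to its generic timestamp recognition (the built-in datetime.xml patterns), and if nothing matches at all it uses context such as the previous event's time or the file's modification time, so the odd files are indexed but may get less accurate timestamps. If those few files need their own handling, a source-pattern override can reassign them without changing the shared sourcetype. A sketch with a hypothetical path and sourcetype name:

```
# props.conf - files matching this source pattern get their own sourcetype,
# so the shared TIME_FORMAT above no longer applies to them
[source::/opt/logs/.../special_*.log]
sourcetype = myapp_special
```

The "..." in a [source::] stanza is Splunk's wildcard for any number of path segments, so one stanza can catch the exceptions wherever they live under the monitored directory.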
I am a newbie to Splunk and am trying to find out what query I can use to find a specific user's browsing history for a specific date and time. We use Palo Alto for our firewall.
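Assuming the Palo Alto Networks Add-on is parsing the firewall logs, URL activity typically lands in the threat log sourcetype with user and url fields. A sketch, where the index name, username, and time window are placeholders to adapt:

```
index=pan_logs sourcetype=pan:threat user="DOMAIN\\jsmith"
    earliest="01/05/2021:09:00:00" latest="01/05/2021:17:00:00"
| table _time, user, url, action
| sort _time
```

The index and the exact user format depend on your deployment; checking one raw event first will show which field actually carries the username.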
Hello all, I'm looking to get both the first and last event for each user from the below search, if anyone can help.
index=wineventlog EventCode=4624 host=machine1* (user=4* OR user=5*)
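A stats pass with earliest() and latest() gives one row per user with both boundary times. A sketch built on the search above (note the parentheses around the OR clause, since AND binds tighter than OR in SPL):

```
index=wineventlog EventCode=4624 host=machine1* (user=4* OR user=5*)
| stats earliest(_time) as first_event latest(_time) as last_event by user
| convert ctime(first_event) ctime(last_event)
```

The convert step renders the epoch times as readable timestamps; drop it if you need the raw epoch values for further calculation.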
Hi, this is my first posting to this community, I believe. I am trying to add a new field called uri_path to an existing data model called Web. The only thing in the constraint is index=web. Names have been changed in this posting. One of the fields in the DM is url, which is used to extract the url_path (a different name here) via the URLParser scripted lookup. https://splunkbase.splunk.com/app/3396/#/details

The above link says the following: "Scripted Lookup: URLParser is also accessible as a scripted lookup. This will be useful for situations where the custom search command cannot be used, like if you are building a datamodel. The scripted lookup is slower than the custom search command. ... | eval list="iana|mozilla" | lookup urlparser_lookup url list"

The following SPL works great:
index=web | head 200 | eval list="iana|mozilla" | lookup urlparser_lookup url list | table url_path | eval uri_path=url_path

My question is, how do you make this work when trying to add a new field called uri_path to the Web DM? If I try to add the below as a calculated field using an eval expression, it doesn't work: there are errors related to searching the index when I try to search the DM. If I remove the new field, the DM search works fine again. Should the index line go before the eval expression? I am thinking it shouldn't.
| eval list="iana|mozilla" | lookup urlparser_lookup url list | table url_path | eval uri_path=url_path

I am thinking maybe a lookup definition is needed, to try it that way when adding a new field, but when I search for a file called urlparser_lookup, no file is found. It does not show up in the drop-down list for adding a new field using a lookup either. I can't create a lookup definition if I can't find the lookup file. I am not sure how to implement this. Any help will be much appreciated. Regards
Hello, has anyone worked with ingest-time lookups and become familiar with them? https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/IngestLookups

I'm confused about where the lookup file is supposed to be. Since this is an ingest-time process, I would think it would need to be on the indexers, but the doc isn't too clear on it. Also, regarding the actual stanza syntax, I'm trying to see if this works.

Lookup command:
lookup test field1 AS new_field1 field2 OUTPUT field3

[lookup-extract]
INGEST_EVAL= field3=json_extract(lookup("test", json_object("field1", new_field1, "field2", field2), json_array("field3")),"field3")

Any help would be appreciated.
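Ingest-time processing happens on the first "heavy" parsing tier the data crosses, so the lookup file and the transforms need to live on the indexers (or on heavy forwarders, if those parse the data first); universal forwarders cannot run INGEST_EVAL. A sketch following the documented form, assuming the lookup is a CSV file named test.csv deployed on that tier and a hypothetical sourcetype name:

```
# transforms.conf on the indexers (or HFs, if they parse first)
[lookup-extract]
INGEST_EVAL = field3=json_extract(lookup("test.csv", json_object("field1", new_field1, "field2", field2), json_array("field3")), "field3")

# props.conf on the same tier, wiring the transform to the sourcetype
[my_sourcetype]
TRANSFORMS-ingestlookup = lookup-extract
```

Note that the documented lookup() eval function takes the CSV file name rather than a lookup definition name, and the json_object keys must match the column names in the CSV while the values are the event fields to match against.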
I have some problems with the time it takes for the first events to show up in Splunk. When I run searches against all indexers, it takes about 10 to 15 seconds until the first raw events show up. After that, it pulls the events very fast, climbing to millions of events very quickly, but the start is really slow. When I limit the search to only one indexer, or a few, it starts faster, but still not at a really acceptable speed. I checked the resource usage of all the instances; even the search head is fine, and it is the same on different search heads. If I search for data where I only get a few results, like 20 or so, it takes ages for them to show up. It seems that something is waiting for more results to arrive first; I'm not sure how to explain it better. I noticed that the startup.handoff time is always high. It is a bit faster with data in warm, but only by 1 or 2 seconds. That should be normal, as warm is on SSD and cold is on normal disks.

Thanks
A new custom app and index were created and successfully deployed to 37 clients, as seen in the Forwarder Management interface on my deployment server. However, I do not see any data when searching in Splunk.

Here is the stanza for the new index:
[sap]
repFactor = auto
homePath = volume:primary/sap/db
coldPath = volume:cold/sap/colddb
thawedPath = /opt/splunk/var/lib/splunk/cold1/sap/thaweddb
tstatsHomePath = volume:primary/sap/datamodel_summary
frozenTimePeriodInSecs = 7776000

Here is the inputs.conf for the new app:
[monitor:///hana/shared/*/XXX00/*/trace]
sourcetype = sap-hana-trace
index = sap

I have checked the Splunk UF logs and don't see any errors. Any help would be much appreciated.
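Two quick checks can narrow this down: confirm whether anything at all has landed in the index, and confirm that the monitor stanza is actually active on a client. A sketch of both, the first run from the search head and the second on one of the forwarders:

```
# on the search head: any events in the index, and from which hosts?
| tstats count where index=sap by host

# on a deployed client: is the monitor stanza loaded, and from which app?
$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug
```

If btool shows the stanza but tstats returns nothing, common culprits are the index not being defined on the indexers, the wildcarded path not matching any real files on the clients, or trace files whose timestamps are older than frozenTimePeriodInSecs (7776000 seconds is 90 days) being frozen shortly after indexing.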