Hi, I have a basic question about the inputs.conf file. In our apps, we have an inputs.conf file under etc/apps/test/inputs.conf, which is normal. But what is the difference between etc/system/local/inputs.conf and etc/apps/test/inputs.conf? Is the inputs.conf file under system an aggregation of all the inputs.conf files of every app? And which inputs.conf file is taken into account first, the one in system or the one in the apps? Regards
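For reference, etc/system/local is not an aggregation; it is just another layer in Splunk's configuration precedence, and it outranks any app-level file, so a setting there wins over the one in etc/apps/test. A quick way to see the merged result and where each setting comes from is btool, which ships with Splunk (run from $SPLUNK_HOME/bin):

./splunk btool inputs list --debug
# --debug prefixes every output line with the file that contributed it,
# which makes the system/local vs. apps precedence directly visible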
If a Splunk Search Head image is destroyed, but the rest of the servers/components are up and running, including the Deployment server, Indexers, HWF, and License server, would it be possible to rebuild just the Search Head, or does the entire Splunk environment need to be rebuilt?
Hi, recently I tried to configure the Splunk TA and App for a DELL EMC ECS storage device. After installation and initial configuration everything looks fine, except that performance data is missing. Capacity and inventory data are available in the DELL EMC ECS App overview dashboard, but I'm lacking performance data.

Here are a few details about the current settings and scenario:
- App and Add-on are installed on the Search Head Cluster, and the Add-on is installed on a HF
- Splunk is version 9.0.5 (higher than recommended in the add-on documentation)
- App and Add-ons are updated to the latest versions
- The DELL EMC ECS device is running 3.7.0.3
- There is communication between the devices (obviously, as we are ingesting some data from the ECS device)
- The Splunk Python version on the HF is 3.7 (the OS version is 2.7, I don't think it matters :P)
- A management user with system monitor privileges was created on the DELL EMC ECS device and is used by Splunk
- As the documentation instructs, I provided the IP of one of the nodes in the VDC (not the virtual IP)
- I even raised the timeout setting in ecs_connect.py to 60 (but there were no timeout errors in the logs)

I know that performance data should be ingested via the flux API, and I see some data from various dell:flux:* sources, but it seems not to be the data expected by the Dell EMC ECS overview dashboard (or it is not parsed correctly, which I really doubt, as it's an officially supported app from Splunkbase).

In the logs I can see one error that occurs every time the Dell EMC ECS input data ingestion happens. The error is "product version", which seems really vague to me. Apart from that I can't find anything useful in the logs. As far as I know nothing else has to be done on the device itself, so I'm kind of lost... Do you have any ideas what else I can check, or have you encountered similar problems?

P.S. I contacted the app vendor and am waiting for a response; if I find a solution in the meantime I'll post it here for other people who may encounter this really weird problem. Best Regards
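A quick sanity check that sometimes helps here (a minimal sketch; adjust index= to wherever your ECS data actually lands — the dell:flux:* source naming is taken from the post, everything else is an assumption):

index=* source=dell:flux:* earliest=-24h
| stats count latest(_time) as last_seen by index sourcetype source
| convert ctime(last_seen)

Comparing the sourcetypes this returns against the ones the overview dashboard's performance panels search for should tell you whether the flux data is genuinely missing or just not matching the dashboard's base searches.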
Hello, We are planning to onboard inventory data from Vcenter. I would like to confirm whether "Splunk Add-on for vCenter Logs" has inventory data. If not, can you please point me to the right add... See more...
Hello, we are planning to onboard inventory data from vCenter. I would like to confirm whether the "Splunk Add-on for vCenter Logs" has inventory data. If not, can you please point me to the right add-on? Thanks
Hello, How to fill the gaps from days with no data in tstats + timechart query? Query: | tstats count as Total where index="abc"  by _time, Type span=1d Getting: Required:   Please su... See more...
Hello, How to fill the gaps from days with no data in tstats + timechart query? Query: | tstats count as Total where index="abc"  by _time, Type span=1d Getting: Required:   Please suggest   Thank You  
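One common pattern (a sketch, assuming days with no events should show as zero): pipe the tstats output through timechart, which emits a bucket for every day in the search range, then fill the empty buckets:

| tstats count as Total where index="abc" by _time, Type span=1d
| timechart span=1d sum(Total) as Total by Type
| fillnull value=0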
I have created an Information Point which basically works, so now I also want to create a metric based on the return value. The only thing is that the return value is a Boolean, so how do I cast it to an Integer (with a Getter Chain, I suppose)?
We are migrating our syslog server to Splunk Connect 4 Syslog running on a RHEL server inside a Docker container. The syslog messages are being forwarded to Splunk; however, SC4S is stripping the domain name off of the device names, causing issues with interfaces that are sending log messages. For example, the host "hostname.contoso.com" will have host=hostname, while the hostname "lo0.hostname.contoso.com." will have host=lo0. It appears SC4S is doing some sort of split on the first period (.) in the hostname and only keeping the first item in the array. Is it possible to tell SC4S to use the FQDN as the hostname? We are using reverse DNS on the SC4S instance, i.e. SC4S_USE_REVERSE_DNS=yes. Any help is much appreciated!!
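For what it's worth, the syslog-ng engine underneath SC4S has a global use_fqdn() option that controls exactly this truncation at the first dot. Whether and how your SC4S version exposes it is something to verify against the SC4S docs, but a local syslog-ng snippet would look roughly like this (the file location under the SC4S local mount is also an assumption):

# e.g. a .conf file under /opt/sc4s/local/config/ (path is an assumption)
options {
    use_fqdn(yes);  # keep the full hostname instead of truncating at the first dot
};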
Hello, Has anyone had issues with the color codes used in your json are not the colors appearing in your visualization?     { "type": "splunk.column", "options": { "legendDisp... See more...
Hello, Has anyone had issues with the color codes used in your json are not the colors appearing in your visualization?     { "type": "splunk.column", "options": { "legendDisplay": "off", "dataValuesDisplay": "all", "yAxisTitleText": "Volulme (GB)", "xAxisTitleText": "Day", "stackMode": "stacked", "seriesColorsByField": { "over_500_red": "#FF0000", "between_400_and_500_orange": "#FFA500", "between_200_and_400_green": "#008000", "under_200_blue": "#0000FF" } }, "dataSources": { "primary": "ds_search_1" }, "title": "License Usage - Last 14 Days", "showProgressBar": false, "showLastUpdated": false, "context": {} }     Green and blue are not in the chart below. Thanks and God bless, Genesius
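One thing worth checking (a sketch; the four bucket names are assumed to come straight out of your data source's search): seriesColorsByField only maps colors onto series that actually exist in the search results, so if no day fell into the green or blue bucket, those series never render at all. Forcing all four columns to exist in ds_search_1 keeps the color mapping stable:

... | fillnull value=0 over_500_red between_400_and_500_orange between_200_and_400_green under_200_blue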
Hello Splunkers, I have an index-time field extraction question. Here is my raw log:

wheel:x:10:user1,user2,user3

I would like to use props.conf and transforms.conf to extract the users.

props.conf:

[mysourcetype]
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRANSFORMS-users = get-users

transforms.conf:

[get-users]
REGEX = (\d:|,)(?<user>\w+)
FORMAT = users::$1

With my current config, I am only able to extract the first match of my regex, which here is user1. How can I extract and store each user value? Thanks for your time, GaetanVP
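A sketch of what usually does the trick for indexed fields (assuming the sourcetype and raw log above): index-time transforms need WRITE_META = true to actually write the extracted field, and REPEAT_MATCH = true keeps applying the regex after the first hit. Note also that $1 in your FORMAT refers to the first capture group, which in this regex is the delimiter; the user value is the second group:

[get-users]
REGEX = (\d:|,)(?<user>\w+)
FORMAT = users::$2
WRITE_META = true
REPEAT_MATCH = true

You would also need the field declared in fields.conf on the search side ([users] with INDEXED = true) so it is treated as an indexed field at search time.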
I have an output of   index=feds  | fillnull value="" | table httpRequest.clientIp labels{}.name awswaf:clientip:geo:country:US awswaf:managed:token:absent awswaf:clientip:geo:region:US-IL ... See more...
I have an output of   index=feds  | fillnull value="" | table httpRequest.clientIp labels{}.name awswaf:clientip:geo:country:US awswaf:managed:token:absent awswaf:clientip:geo:region:US-IL awswaf:managed:aws:bot-control:signal:non_browser_user_agent   wswaf:clientip:geo:country:US awswaf:managed:token:absent awswaf:clientip:geo:region:US-IL awswaf:managed:aws:bot-control:signal:non_browser_user_agent   wswaf:clientip:geo:country:US awswaf:managed:token:absent awswaf:clientip:geo:region:US-IL awswaf:managed:aws:bot-control:signal:non_browser_user_agent   But need to filter "awswaf:managed:aws:bot-control:signal:non_browser_user_agent" on Table output and see the results only on "awswaf:managed:aws:bot-control:signal:non_browser_user_agent"
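If the goal is to keep only the rows where that label is populated, a sketch (field names containing colons have to be wrapped in single quotes on the left-hand side of where/eval):

index=feds
| fillnull value=""
| where 'awswaf:managed:aws:bot-control:signal:non_browser_user_agent'!=""
| table httpRequest.clientIp labels{}.name awswaf:managed:aws:bot-control:signal:non_browser_user_agent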
We have activated several data models for use with Splunk Enterprise security scenarios and are interested in clarifying the retention period for the summaries generated by these data models. According to the Splunk documentation, the retention period is determined by the accelerated summary range. For instance, if our network traffic accelerated summary range is set to 15 days, does this imply that the retention period is also 15 days, and that it stores 15 days' worth of summaries?
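For reference, the summary range is configured per data model in datamodels.conf, and the docs describe it as bounding both how far back summaries are built and how long summary data is kept, so a 15-day range does mean roughly 15 days of stored summaries. A sketch (the stanza name for the Network Traffic model is an assumption):

[Network_Traffic]
acceleration = true
acceleration.earliest_time = -15d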
I am able to get the list of URL with top response time using below query. index=xyz earliest=-1hr latest=now | rex field=_raw "^(?\d*\.\d*\.\d*\.\d*)\s\[\w.*\]\s(?\d*)\s\"(?\w*)\s(?\S*)\sHTTP\/1.1... See more...
I am able to get the list of URLs with the top response time using the query below (the rex group names, stripped in the paste, match the fields in the search clause):

index=xyz earliest=-1hr latest=now
| rex field=_raw "^(?<sourceLBIP>\d*\.\d*\.\d*\.\d*)\s\[\w.*\]\s(?<responsetime>\d*)\s\"(?<getorpost>\w*)\s(?<uri>\S*)\sHTTP\/1.1\"\s(?<statuscode>\d*)\s(?<responsesize>\d*)\"(?<refereralURL>\S*)\"\"\w.*\"\s\S*(?<node>web*\d*)\s\S*"
| search sourceLBIP="*" responsetime="*" getorpost="*" uri="*" statuscode="*" responsesize="*" refereralURL="*" node="*"
| eval responsetime1=responsetime/1000000
| stats count by responsetime1, node, responsesize, uri, _time, statuscode
| sort -responsetime1
| head 1

I am trying to modify this query to get more detailed information. I am able to get the top 1 URL with the highest response time, but I need the timechart counterpart to understand the response-time trend for that specific URL over the last hour. I would also like to modify the query so that it gives me the timechart trend of whichever URL has the top response time over the hour, since the URL may not be the same every time.
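One way to sketch it (same extraction as above; the two eventstats find the URI whose worst response time is the overall worst, keep only its events, and chart the trend — the span and avg() are illustrative choices):

index=xyz earliest=-1hr latest=now
| rex field=_raw "^(?<sourceLBIP>\d*\.\d*\.\d*\.\d*)\s\[\w.*\]\s(?<responsetime>\d*)\s\"(?<getorpost>\w*)\s(?<uri>\S*)\sHTTP\/1.1\"\s(?<statuscode>\d*)\s(?<responsesize>\d*)\"(?<refereralURL>\S*)\"\"\w.*\"\s\S*(?<node>web*\d*)\s\S*"
| eval responsetime1=responsetime/1000000
| eventstats max(responsetime1) as uri_max by uri
| eventstats max(uri_max) as overall_max
| where uri_max=overall_max
| timechart span=5m avg(responsetime1) as avg_responsetime by uri

Because the slowest URI is computed inside the search itself, the query keeps working when the top URL changes from run to run.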
Hi, Splunk has been working for a long period without any trouble. When I changed settings yesterday (I can't remember what I did), the search command does not work as before (no answer). If I go to Settings - Indexing, _audit, _internal, _introspection, _telemetry, _history and the main area are all disabled. I also googled it, and it says that it perhaps has something to do with identical IDs under the db directory. We have the same ID on some files with .sentinel, for example:

db_123_345_12
db_123_345_12.rbsentinel

If I run the following command: netstat -an | grep 9997, we have many TCP sessions established. I have of course rebooted and restarted the Splunk server several times; it does not help much. Thanks in advance. Hope someone can give me a hint. Rgds Geir
Hi,  Looking to get 1 month report for all alert generated from a splunk app. My "FSS" app have around 60 alerts configured. want to generate report in last one month which all alert get triggere... See more...
Hi,  Looking to get 1 month report for all alert generated from a splunk app. My "FSS" app have around 60 alerts configured. want to generate report in last one month which all alert get triggered by splunk with date time.   Thanks Abhineet Kumar
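A sketch against the scheduler's internal logs (assuming the app context is named FSS; alert_actions is populated when a scheduled search actually fired an alert action):

index=_internal sourcetype=scheduler app="FSS" status=success alert_actions=* earliest=-30d
| convert ctime(_time) as triggered_at
| table triggered_at savedsearch_name alert_actions
| sort - triggered_at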
Hi, I'm trying to figure out the most recommended way to set up an index that stores data ingested in the following manner: 1) Every ~30 days a baseline of events is sent, specifying the current "t... See more...
Hi, I'm trying to figure out the most recommended way to set up an index that stores data ingested in the following manner:
1) Every ~30 days a baseline of events is sent, specifying the current "truth".
2) Between baselines, small updates are ingested, specifying diffs from the previous baseline.

A baseline would be around ~1 GB, and the small updates would be ~1 MB every few days. Queries on this index will build a "current state" by querying the baseline + the updates since. This would require a baseline + updates to be kept in warm buckets.

I was wondering what would be the best indexes.conf configuration for this case? My initial thought was:

frozenTimePeriodInSecs = 7776000  # 90 days, to keep ~3 baselines
maxDataSize = 2000  # max size of a baseline
maxWarmDBCount = 30

The reason I set maxWarmDBCount to 30 was in case of an update every day, with automatic rolling from hot to warm buckets. If hot buckets can stay hot for multiple days, I could reduce this number. Any inputs? Thanks!
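For what it's worth, hot buckets can stay hot across many days by default; they roll on size (maxDataSize), on hot-bucket count (maxHotBuckets), or on the time-based settings below. A sketch with the two time knobs added (the stanza name and values are illustrative, not recommendations):

[my_baseline_index]
frozenTimePeriodInSecs = 7776000
maxDataSize = 2000
maxWarmDBCount = 30
maxHotIdleSecs = 86400   # roll a hot bucket after 24h with no new data
maxHotSpanSecs = 2592000 # cap the time span a single hot bucket may cover

maxHotIdleSecs controls how quickly an idle hot bucket rolls to warm, and maxHotSpanSecs bounds how much time one hot bucket can cover, which together determine how many warm buckets a baseline-plus-updates window turns into.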
Hi, I would like to know the difference between version 1 and version 2 of the stats command. Thank you Kind regards Marta
I have a Splunk Enterprise installation and a Splunk Cloud stack, and I want to migrate logging from Enterprise to Splunk Cloud. My EC2 machines have an old Splunk Forwarder installed and are forwarding to the Splunk Enterprise instance. The log file I'm ingesting is JSON format, but each line contains a SYSLOG prefix. This prefix seems to be stripped out by Splunk Enterprise from what I can tell. The sourcetype of the log is a custom type which is NOT explicitly defined on the Splunk Enterprise server. Since the log is JSON, no explicit field extraction is needed; the log events are just JSON messages and are properly extracted.

Now I've changed outputs.conf on the EC2 machine to send the logs to Splunk Cloud. Nothing else changed. Splunk Cloud indexes the events, but the SYSLOG header shows up in Splunk Cloud. That's why the events don't seem to be recognized as JSON and field extraction is not working. Any idea how to tell Splunk Cloud to strip the SYSLOG header from these events? And especially, why was this apparently working automatically on the Splunk Enterprise side? Both Splunk installations have the Splunk Add-on for Unix installed, which seems to contain configuration for stripping SYSLOG headers from events, but I don't understand yet how that comes into action.

My inputs.conf:

[monitor:///var/log/slbs/tors_access.log]
disabled = false
blacklist = \.(gz|bz2|z|zip)$
sourcetype = tors_access
index = torsindex

There is no props.conf or transforms.conf on the EC2 machine with the Splunk forwarder for this app (and if there were, it should have kicked in when I changed the output to Splunk Cloud).
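One way to sketch a fix on the Splunk Cloud side, deployed as a small app containing props.conf (the SEDCMD regex assumes the JSON payload starts at the first "{" on each line, so verify that against your actual prefix):

[tors_access]
# strip everything up to the first "{" (assumes the syslog prefix never contains one)
SEDCMD-strip_syslog = s/^[^{]*//
KV_MODE = json

SEDCMD runs at index time, so the stored _raw becomes clean JSON, and KV_MODE = json handles the search-time extraction. A plausible explanation for why Enterprise "just worked" is that some props.conf on that indexer (the Unix add-on is a candidate) was matching these events, and that configuration did not follow the data to Splunk Cloud.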
Hi Team, we have 4 Search Heads in a cluster, and one Search Head is getting a KV store port issue asking us to change the port; the remaining 3 SHs are working fine. We are unable to restart Splunk on that particular SH. If I check the SH cluster status, only 3 servers are showing now. Splunk installed version: 9.0.4.1. For error visibility, please find the attached (screenshot). Regards, Siva.
Hello everyone! We have a container service running on AWS ECS with the Splunk log driver enabled (via HEC token). At the moment, the log lines look awful (see the example below), and there is no event level extracted:

{
   line: xxxxxxxxx - - [16/Sep/2023:23:59:59 +0000] "GET /health HTTP/1.1" 200 236 "-" "ELB-HealthChecker/2.0" "-"
   source: stdout
   tag: xxxxxxxxxxx
}
host = xxx source = xxx sourcetype = xxxx

We would like to make changes in Splunk so the events follow a better-formatted standard, like the following:

Sep 19 03:27:09 ip-xxx.xxxx xx[16151]: xxx ERROR xx - DIST:xx.xx BAS:8 NID:w-xxxxxx RID:b FID:bxxxx WSID:xxxx
host = xxx level = ERROR source = xxx sourcetype = xxx

We do have a log forwarder rule configured (logs for other services are all formatted as above). May I get some help to reformat the logs? Much appreciated!
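One knob worth a look (a sketch; the Docker/ECS Splunk logging driver wraps each line in that {line, source, tag} JSON envelope by default, and splunk-format=raw makes it send just the message text instead). In an ECS task definition it would look roughly like this, with the URL and token as placeholders:

"logConfiguration": {
    "logDriver": "splunk",
    "options": {
        "splunk-url": "https://<your-hec-endpoint>:8088",
        "splunk-token": "<HEC token>",
        "splunk-format": "raw",
        "tag": "{{.Name}}/{{.ID}}"
    }
}

If the envelope has to stay, the alternative is reshaping at index time with props/transforms on the Splunk side, but changing the driver format is usually the smaller change.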
Hello! I want to count how many different kind of errors appeared for different services.  At the moment, I'm searching for the errors like this  Index=etc message = "error 1" OR "error 2" OR ... "... See more...
Hello! I want to count how many different kinds of errors appeared for different services. At the moment, I'm searching for the errors like this:

index=etc message="error 1" OR "error 2" OR ... "error N"
| chart count by instance_name, message

And I get as a result:

instance_name | "error 1 for us1" | "error 1 for us2" | ... | "error 1 for usN" | Other

and under those column names it shows how many times each error appeared. How can I count them without caring about the user, only caring about the "error 1" string? I mean, I want the result to look like:

instance_name | error 1 | error 2 | ... | error N
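A sketch of the usual approach: normalize the message into an error type before charting. The replace() pattern is an assumption about how the user suffix appears (a trailing "for usN"); adjust the regex to whatever actually varies in your messages:

index=etc message="error 1*" OR message="error 2*"
| eval error_type=replace(message, "\s+for\s+\S+$", "")
| chart count by instance_name, error_type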