All Topics


I'm using Splunk Enterprise with a developer license. I have log files on my computer (access and error logs). I successfully indexed them and I can run searches. I have Splunk Security Essentials installed, and now I want to test the previously indexed data against the use cases provided by Security Essentials. I read the docs and everything else I found on Google, but I don't get it. When I try to use "Automated Introspection" in "Data Inventory", I get no results. When I try to use "Data Source Check", I get no results. I don't know what to do. My task is to apply the given use cases to the data from the access and error logs and to evaluate whether they are usable in our context. Furthermore, I have to create my own use cases to cover a broad spread of use cases. All of them must be based on Kill Chains and MITRE ATT&CK. I have no idea how to solve my problem with the data or how to proceed with my task. Thanks in advance.
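Security Essentials' Data Inventory maps data by sourcetype and CIM data model, so ad-hoc uploads with generic or custom sourcetypes often come back empty until they are mapped. One way to see what introspection has to work with is a quick sourcetype census (a sketch; assumes your data is in the default searchable indexes):

```
| tstats count where index=* by index, sourcetype
```

If the access/error logs show up under a custom sourcetype, they usually need to be mapped to the expected sourcetypes or CIM data models before the SSE use cases will match them.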
Hi all, after checking the documentation I am still not much wiser. I hope some of you have perhaps encountered the same issue and have found a solution.

The scenario:
- 2 sites, both in Europe, defined as a multisite Indexer Cluster
- Site 1 has the Cluster Master (CM) configured for the multisite IXC
- Site 2 is not maintained by you (external); a VM copy / snapshot etc. is not possible
- Site 1 (with the CM) goes down or is unavailable to the other site

The documentation specifies that the CM should be set up immediately: "If the site holding the master node fails, you lose the master's functionality. You must immediately start a new master on one of the remaining sites. Configure a stand-by master on at least one of the sites not hosting the current master. When the master site goes down, bring up a stand-by master on one of the remaining sites."

My questions:
- Why can't the master on site 2 be up and running? It has to be "brought" up...
- Re-configuring all peers to point to the new master takes time! How can this be avoided?
- If the "failover" CM is not known to the peers, can it still be up?

For reference, see the following pages:
https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Mastersitefailure
https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Handlemasternodefailure
https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Whathappenswhenamasternodegoesdown

Thank you in advance for any hints or advice. Kind regards, Marco
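Only one active cluster master is supported at a time, which is why the documentation says the standby must be "brought up" rather than left running. Re-pointing every peer can be avoided if the peers reference the master through a DNS alias that you repoint on failover. A sketch of the peer-side server.conf, assuming Splunk 8.0.x (the alias cm.example.com and the key are placeholders):

```
# server.conf on each peer node; repointing the DNS alias to the
# standby CM means no peer-side reconfiguration on failover
[clustering]
mode = slave
master_uri = https://cm.example.com:8089
pass4SymmKey = <cluster_key>
```

The standby must carry the same configuration (including pass4SymmKey and site settings) as the original master for this to work.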
Hello All, We are currently ingesting IIS logs that are created in W3C format. We're using a simple folder monitor with the following inputs.conf syntax:

[monitor://C:\inetpub\logs\LogFiles]
disabled = false
recursive = true
index = iis_staging
sourcetype = iis
ignoreOlderThan = 7d

Now, our web admins want to change IIS logging from W3C to IIS format. I have installed the Splunk Add-on for Microsoft IIS on our local deployment server, but I am concerned about existing logs in our cloud instance and what may happen to them if I switch from the plain file monitor to the IIS app. Can the IIS app write to the same index, or will I need to create a new index and take other steps to prepare for the new logging format? Thanks in advance for any advice. Best, Chris
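Indexes are format-agnostic, so the new IIS-format events can go to the same iis_staging index; already-indexed W3C events are immutable and keep their original sourcetype. What matters is assigning the sourcetype the add-on expects to the new data. A sketch of the updated monitor stanza (the sourcetype name below is illustrative; check the Splunk Add-on for Microsoft IIS documentation for the exact value):

```
[monitor://C:\inetpub\logs\LogFiles]
disabled = false
recursive = true
index = iis_staging
# sourcetype name is an assumption; verify against the add-on docs
sourcetype = ms:iis:auto
ignoreOlderThan = 7d
```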
I upgraded a Splunk Enterprise instance from 7.3.0 to 8.0.0. All preliminary checks passed when Splunk restarted. But when I accessed the web interface, it returned this Python error:

500 Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.

Traceback (most recent call last):
  File "/app/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/app/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 659, in _do_respond
    self.get_resource(path_info)
  File "/app/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 749, in get_resource
    dispatch(path)
  File "/app/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/i18n.py", line 405, in __call__
    request.lang, request.mofile, request.t = get_translation('messages', [locale])
  File "/app/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/i18n.py", line 550, in get_translation
    t = translations.setdefault(key, SparkleTranslations(open(mofile, 'rb')))  # was GNUTranslations
  File "/app/splunk/lib/python3.7/gettext.py", line 259, in __init__
    self._parse(fp)
  File "/app/splunk/lib/python3.7/gettext.py", line 420, in _parse
    catalog[str(msg, charset)] = str(tmsg, charset)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128)

Powered by CherryPy unknown
Splunk Add-on Builder gives us two options for saving checkpoints: File and Auto. The File type saves checkpoints to /opt/splunk/var/lib/splunk/modinputs/. Where does the Auto type save its checkpoints?
As an admin, I created a new user and assigned him the user role, which basically allows viewing dashboards / searches. However, when he tries to log into Splunk he gets the error message "Oops. Page not found! Click here to return to Splunk homepage." I deleted and re-added the user, and I cloned from other users who can access Splunk with the same role. Nothing worked. What could be the issue? Thanks
I've used the Splunk Stream app to get DNS logs from a Windows DNS server. I got the logs to a search head instance that has the Enterprise Security app. However, I can't seem to find the data, which is in CIM-compliant JSON format. What would be the best way to make the query field CIM compliant with the query field in the Network Resolution (DNS) data model, as described here: https://docs.splunk.com/Documentation/CIM/4.15.0/User/NetworkResolutionDNS Below is a sample raw log:

{"endtime":"2020-03-04T16:13:55.892181Z","timestamp":"2020-03-04T16:13:55.886950Z","bytes":237,"bytes_in":35,"bytes_out":202,"dest_ip":"8.8.8.8","dest_mac":"00:15:5D:FA:54:6B","dest_port":53,"flow_id":"d53fcb9a-ea29-4761-ac1a-de6ca66d31e4","host_addr":["104.115.41.252"],"hostname":["www.microsoft.com-c-3.edgekey.net","www.microsoft.com-c-3.edgekey.net.globalredir.akadns.net","e13678.dspb.akamaiedge.net"],"message_type":["QUERY","RESPONSE"],"name":["www.microsoft.com","www.microsoft.com-c-3.edgekey.net","www.microsoft.com-c-3.edgekey.net.globalredir.akadns.net","e13678.dspb.akamaiedge.net"],"protocol_stack":"ip:udp:dns","query":["www.microsoft.com"],"query_type":["A"],"reply_code":"NoError","reply_code_id":0,"response_time":5231,"src_ip":"14.33.31.16","src_mac":"AA:AA:BB:BB:00:51","src_port":65031,"time_taken":5231,"transaction_id":4481,"transport":"udp","ttl":[2265,4012,461,20]}
I need to create a report that alerts on the following: I'd like to know when, and by whom, a specific group was added to a user in AD. Any insight or help is greatly appreciated.
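If Windows Security logs are already indexed, group-membership changes appear as dedicated events: EventCodes 4728 (global group), 4732 (local group), and 4756 (universal group) all record "a member was added to a security-enabled group", including the acting account. A sketch of a starting search (index, sourcetype, and field names vary by Windows TA version; treat them as placeholders):

```
index=wineventlog sourcetype=WinEventLog:Security (EventCode=4728 OR EventCode=4732 OR EventCode=4756)
| table _time, EventCode, src_user, user, Group_Name
```

Filter on the specific group name and save the search as an alert to get the notification behavior described.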
One of my forwarders is not connecting with the indexers. Another system that is identical is connecting just fine. I keep getting errors about the message being rejected because it's too big, but I can't figure out where to adjust the allowed message size. This error is from the indexer: 03-04-2020 15:45:26.122 +0000 ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=x.x.x.x:57186 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
Hi all, In this particular situation we'd like to use a heavy forwarder to pull Windows event logs from Windows devices in a datacenter through WMI on-prem and send them to Splunk Cloud. Is that possible? Thanks in advance, Erik
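Remote WMI collection is a full-Splunk-Enterprise feature (the HF must run on Windows under a domain account with WMI rights to the targets), and the HF can then forward to Splunk Cloud like any other forwarder. A sketch of wmi.conf on the HF (stanza name, server names, and index are placeholders):

```
# wmi.conf on the heavy forwarder
[WMI:RemoteEventLogs]
interval = 60
server = winhost1, winhost2
event_log_file = Application, Security, System
disabled = 0
index = wineventlog
```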
Hi, Any troubleshooting advice appreciated. The TA-threatconnect app (version 3.1.12) is no longer working on a HF at version 7.3.3. Data from before the upgrade is still present, but nothing has come in since the day of the upgrade. I cannot find any errors in splunkd.log. Please advise. Thank you
Hello, I would like to achieve the following: I have a dashboard with the timeline visualization and would like to get the duration of each of the steps either displayed directly on the graphic, let us say in the middle, or at least provided as additional info in the tooltip. At the moment the only things in the tooltip are the start time and end time, from which the end user has to calculate the duration, which in my case is the key information. How would I achieve this? Kind Regards, Kamil
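One common approach is to precompute the duration in the search that feeds the visualization, so it is available as its own field wherever the viz surfaces extra fields. A minimal sketch, assuming the events carry epoch fields start_time and end_time (hypothetical names; substitute your own):

```
... | eval duration = tostring(end_time - start_time, "duration")
```

tostring(x, "duration") renders seconds as HH:MM:SS, which reads better in a tooltip than a raw number.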
Hello Splunk Community, I've got logs on RHEL and CentOS servers. I'd like to be able to upload all logs from /var/log/ to Splunk. Is there a way I can do something like a cURL -T http:/// ... to transfer files? Please accept my apologies if this question has already been asked and answered. I've been reading some of the questions and it seems that folks have asked whether it can be done via REST, but I don't see that the question has been answered. Thank you, Radesh
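There is no plain curl -T upload into the indexing pipeline, but two standard alternatives exist. A sketch, assuming a reachable Splunk server (hostname, token, and paths below are placeholders):

```
# One-off indexing of a file via the Splunk CLI, run on the Splunk server
/opt/splunk/bin/splunk add oneshot /var/log/messages -index main -sourcetype syslog

# Or send events over HTTP using the HTTP Event Collector (HEC)
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token-placeholder>" \
  -d '{"event": "test event from RHEL", "sourcetype": "syslog"}'
```

For continuous collection of /var/log/, a Universal Forwarder with a [monitor:///var/log] input is the usual answer rather than repeated uploads.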
Hi guys! I'm curious whether there's a tool for Splunk similar to Curator in Elasticsearch, for deleting indexes, backups, etc. Deleting indexes that are older than 1 month, for example... Thanks, Aleksei
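Splunk builds retention into the indexer itself rather than shipping a separate tool like Curator: buckets roll to frozen once they age out, and frozen data is deleted by default. A sketch in indexes.conf, assuming a 30-day retention (the index name is a placeholder):

```
[my_index]
# Buckets whose newest event is older than ~30 days roll to frozen;
# frozen data is deleted unless coldToFrozenDir or coldToFrozenScript
# is configured to archive it instead
frozenTimePeriodInSecs = 2592000
```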
Hi, I have a scenario where I need to forward data from one heavy forwarder (HF) to another, but I need it to be uncooked so the receiving HF can also forward it to a 3rd party. Is this possible? Thanks!
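outputs.conf has a setting for exactly this: sendCookedData = false sends the stream raw. A sketch on the sending HF (group name and target host are placeholders):

```
[tcpout:uncooked_to_hf2]
server = hf2.example.com:9997
sendCookedData = false
```

Note the receiving HF then needs a plain [tcp://<port>] input rather than a [splunktcp://<port>] input, since the stream is no longer in Splunk's cooked format.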
Hi, I am trying to build a props.conf for the following log entry. The log is based on an SQL run and so is a mixture of SQL output and text. My props.conf is:

[check_script]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 17
TIME_PREFIX = ^
TIME_FORMAT = %y/%m/%d %H:%M:%S
TRUNCATE = 0

Sample log:

03/03/20 07:11:01 Events without Source entries

NUM SERVICE_KEY NAME KEY EVENT_DATE_TIME EVENT_DATE_TIME START_DATE_TIME NAME
45 401 Greats 0429785 58911.3298611 03-Mar-20 07:55:00 58911.3298611 High
45 401 Greats 0429786 58911.4131944 03-Mar-20 09:55:00 58911.4131944 Men
45 401 Greats 0429787 58911.4791667 03-Mar-20 11:30:00 58911.4791667 Blind
45 401 Greats 0429788 58911.5729167 03-Mar-20 13:45:00 58911.5729167 Desert
45 401 Greats 0429789 58911.6388889 03-Mar-20 15:20:00 58911.6388889 Jaw
45 401 Greats 0429790 58911.7291667 03-Mar-20 17:30:00 58911.7291667 War
45 401 Greats 0429791 58911.8125 03-Mar-20 19:30:00 58911.8125 Men
45 401 Greats 0429792 58911.875 03-Mar-20 21:00:00 58911.875 Blind
45 401 Greats 0429793 58911.96875 03-Mar-20 23:15:00 58911.96875 First
45 401 Greats 0429794 58912.0416667 04-Mar-20 01:00:00 58912.0416667 Blood
45 401 Greats 0429795 58912.1145833 04-Mar-20 02:45:00 58912.1145833 3
45 401 Greats 0429796 58912.1909722 04-Mar-20 04:35:00 58912.1909722 Desert

12 rows selected.

03/03/20 07:11:01 Gaps in Push Schedule

Last Event for Service Key: 409 03-Mar-2020 09:35:00 58911.3993056 Duration: 01:45:00 'land: Tap'
Warning: Last Scheduled Asset Finishes 10 Minutes+ before Service Switch Off (or now + 48 hours)...
Last Event for Service Key: 409 03-Mar-2020 09:33:00 58911.3979167 Duration: 01:45:00 'land: Tap'
Warning: Last Scheduled Asset Finishes 10 Minutes+ before Service Switch Off (or now + 48 hours)...

It's currently splitting on each line and associating the date in the log line with each entry.
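One way to keep each timestamped block together is to anchor LINE_BREAKER so Splunk only breaks before a new header timestamp, instead of at every newline. A sketch, assuming the day/month order in "03/03/20" is day-first (swap %d and %m if it is month-first):

```
[check_script]
SHOULD_LINEMERGE = false
# Break only where a new "dd/mm/yy HH:MM:SS" line begins, so the SQL
# table rows stay inside the event started by their header timestamp
LINE_BREAKER = ([\r\n]+)(?=\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 17
TRUNCATE = 0
```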
I need to sum several dates that are in a single field and then divide by another field to get an average date. Do you have any clues?
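A sketch of one way to average dates in SPL: convert to epoch, average, convert back. It assumes the field my_date is multivalued and formatted %Y-%m-%d (both the name and the format are assumptions; adjust to your data):

```
| mvexpand my_date
| eval epoch = strptime(my_date, "%Y-%m-%d")
| stats avg(epoch) as avg_epoch
| eval avg_date = strftime(avg_epoch, "%Y-%m-%d %H:%M:%S")
```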
hi I use the complex search below. As you can see, there is a subsearch linked with a join command. I am trying to find a way to do the same search but without the join command. I started to write that search (see below), but I have an issue: the field "host" in wire is called "USERNAME". So I need to do | rename USERNAME as host, but it doesn't work, and as a consequence I am unable to do a "stats by" afterwards. Can anybody help me?

`wire` earliest=-30d latest=now
| fields USERNAME NAME Building AP_NAME
| rename USERNAME as host
| eval host=upper(host)
| lookup toto.csv NAME as AP_NAME OUTPUT Building
| eval Building=upper(Building)
| stats last(AP_NAME) as "AP", last(Building) as "Geol" by host
**| join host type=outer**
    [| search `LastLogonBoot` earliest=-30d latest=now
    | fields host SystemTime EventCode
    | eval SystemTime=strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9Q%Z'")
    | stats latest(SystemTime) as SystemTime by host EventCode
    | xyseries host EventCode SystemTime
    | rename "6005" as LastLogon "6006" as LastReboot
    | eval NbDaysReboot=round((now() - LastReboot )/(3600*24), 0)
    | eval LastReboot=strftime(LastReboot, "%y-%m-%d %H:%M")
    | lookup tutu.csv HOSTNAME as host output SITE BUILDING_CODE DESCRIPTION_MODEL ROOM STATUS
    | stats last(LastReboot) as "Last reboot date", last(NbDaysReboot) as "Days without reboot", last(DESCRIPTION_MODEL) as Model, last(SITE) as Site, last(AP_NAME) as AP, last(BUILDING_CODE) as Building, last(ROOM) as Room, last(STATUS) as Status by host ]
| search "Days without reboot" > 5
| search Site = *
| rename host as Hostname
| table Hostname Model Status "Days without reboot" "Last reboot date" Site Building Room AP Geol
| sort -"Days without reboot"

And my join-free attempt so far:

[| inputlookup host.csv | table host ] (`LastLogonBoot`) OR (`wire`) earliest=-24h latest=now
| fields host SystemTime EventCode USERNAME NAME AP_NAME
**| rename USERNAME as host**
| lookup tutu.csv NAME as AP_NAME OUTPUT Building
| eval SystemTime=strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9Q%Z'")
| stats latest(SystemTime) as SystemTime by host EventCode
| xyseries host EventCode SystemTime
| rename "6005" as LastLogon "6006" as LastReboot
| eval NbDaysReboot=round((now() - LastReboot )/(3600*24), 0)
| eval LastReboot=strftime(LastReboot, "%y-%m-%d %H:%M")
| lookup toto.csv HOSTNAME as host output SITE BUILDING_CODE DESCRIPTION_MODEL ROOM STATUS
| stats last(LastReboot) as "Last reboot date", last(NbDaysReboot) as "Days without reboot", last(DESCRIPTION_MODEL) as Model, last(SITE) as Site, last(BUILDING_CODE) as Building, last(ROOM) as Room, last(STATUS) as Status by host
| sort -"Days without reboot"
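For the join-free version, a common pattern is to normalize the key with coalesce instead of rename: rename can behave surprisingly when the source field is absent from some of the combined events, whereas coalesce simply picks whichever field exists on each event. A sketch of the opening of the combined search:

```
(`LastLogonBoot`) OR (`wire`) earliest=-24h latest=now
| fields host SystemTime EventCode USERNAME NAME AP_NAME
| eval host = upper(coalesce(USERNAME, host))
| stats latest(SystemTime) as SystemTime by host EventCode
```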
I need to perform a subtraction between two date fields in order to get a specific age. How can I do this?
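A sketch of the usual SPL approach: parse both fields to epoch seconds, subtract, and scale. It assumes two string date fields birth_date and ref_date in %Y-%m-%d format (names and format are placeholders):

```
| eval age_days = floor((strptime(ref_date, "%Y-%m-%d") - strptime(birth_date, "%Y-%m-%d")) / 86400)
| eval age_years = floor(age_days / 365.25)
```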
I would like to remove an email address from my scheduled PDF report. Could you please advise where I can edit the scheduled PDF delivery recipients?