You do not have to touch enable_insecure_login to allow external users to use your Splunk instance. Just add a local Splunk account for each of them and give them the loginType=splunk URL.
Hi @AvivBenSha, as its name suggests, a Heavy Forwarder forwards data to the indexers, so it isn't involved in the indexing phase. In that phase, indexers store the _raw data and index events. HFs are only involved in the input, merging, typing, and parsing phases, not in the indexing phase. UFs, by contrast, are only involved in the input phase, not in the others. Ciao. Giuseppe
Hello Splunkers, my Splunk instance is configured with SAML authentication by default. Now I want to give users from an external domain access to a list of Splunk dashboards. How can I do that? I searched the community and found that we can use en-US/account/login?loginType=splunk after changing enable_insecure_login = False in web.conf. I'm a little worried about the consequences of changing that setting. Is there any way to provide access to external users without security concerns? Thank you in advance!
I'm looking to export Services from Splunk ITSI; however, there is no direct export feature in the GUI (at least within the Services page). Is there any other way to export ITSI services?
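One commonly used alternative to the GUI is ITSI's REST interface, which exposes service objects as JSON. Below is a minimal sketch, assuming the standard itoa_interface endpoint on the management port (8089); the host name, port, and auth header are placeholders you would replace with your own.

```python
# Hedged sketch: pull ITSI service definitions as JSON via the ITSI REST
# interface (itoa_interface). Host, port, and credentials are placeholders.
import json
import urllib.request


def build_service_export_url(host: str, port: int = 8089) -> str:
    # The itoa_interface "service" endpoint returns all service objects.
    return f"https://{host}:{port}/servicesNS/nobody/SA-ITOA/itoa_interface/service"


def export_services(host: str, auth_header: str):
    # auth_header is e.g. "Bearer <token>" or a Basic auth value.
    req = urllib.request.Request(
        build_service_export_url(host),
        headers={"Authorization": auth_header},
    )
    # In a real deployment you may need to supply an SSL context here.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Saving that JSON gives you an export you can re-import or version-control; check the ITSI REST API reference for the exact fields your version returns.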
I tried changing it, but it didn't work. Somehow I managed to receive the PEM file, but when applying the certificate it doesn't work. Any help with the steps to configure this would be much appreciated.
Heavy Forwarders parse data if they are the first full Splunk instance to see that data. A HF does everything an indexer would do *except* write the data to disk. HFs do not "index" any data - that's done by indexers.
There should be no overlap between the TIME_PREFIX and TIME_FORMAT settings. Splunk skips past TIME_PREFIX and then starts looking for text that matches TIME_FORMAT. Since "<Data Name='date'>" doesn't appear twice in a row, there is no match for the timestamp. Try these settings:
TIME_PREFIX = <Data Name='date'>
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = %Y-%m-%d</Data><Data Name='time'>%H:%M:%S
TZ = UTC
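The key point is that TIME_FORMAT describes only the text *after* TIME_PREFIX, with the literal XML between the date and time matched verbatim. As a sanity check (not how Splunk itself parses, just an analogy), Python's strptime applies the same literal-matching rule, so you can verify the format string against the sample event text:

```python
from datetime import datetime

# The text that immediately follows TIME_PREFIX = <Data Name='date'>
# in the sample event (43 chars, well under MAX_TIMESTAMP_LOOKAHEAD = 100).
sample = "2024-04-04</Data><Data Name='time'>12:23:37"

# TIME_FORMAT from the answer above: literal tag text between the
# %-directives must match the raw event exactly, just like in Splunk.
TIME_FORMAT = "%Y-%m-%d</Data><Data Name='time'>%H:%M:%S"

parsed = datetime.strptime(sample, TIME_FORMAT)
print(parsed)  # 2024-04-04 12:23:37
```

If you prepend "<Data Name='date'>" to TIME_FORMAT as well, the match fails, because that text was already consumed by TIME_PREFIX.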
From what I understand about Splunk, it works on the raw data and does not parse it; it marks and "segments" areas of the data in the tsidx file. Also, from what I understand about HF vs. UF, unlike the universal forwarder, the heavy forwarder does part of the indexing itself. So what exactly does it index? Does it segment the raw data into the tsidx file and send both to the indexer?
Thanks a lot @gcusello! I just created a search to build the CSV used in your query:
| ldapsearch domain=default search="(objectClass=computer)"
| table name
| rename name as host
| outputlookup append=false monitored_hosts.csv
Then I ran your query using monitored_hosts.csv. It works flawlessly! Thanks once again.
Can you post the output of this command? (Replace prd-p-xxxxx with your trial stack's name.)
openssl s_client -connect prd-p-xxxxx.splunkcloud.com:8088
I suspect the cert you'll see returned is from the Splunk internal CA, and that the Splunk Cloud trials are not set up with a signed cert on port 8089. On a production/paid Splunk Cloud stack you'd send logs to https://http-inputs-<stack_name>.splunkcloud.com on port 443, and I've never seen an issue with certificate validation in those environments (it uses the same cert as the web interface).
I'm experimenting with ETW logging of Microsoft IIS, where the IIS log ends up as XML in a Windows event log. But I have problems getting Splunk to use the correct timestamp field: Splunk uses the TimeCreated property for the event time (_time), and not the date and time properties that indicate when IIS served the actual web page. An example:
<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-IIS-Logging' Guid='{7e8ad27f-b271-4ea2-a783-a47bde29143b}'/><EventID>6200</EventID><Version>0</Version><Level>4</Level><Task>0</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime='2024-04-04T12:23:43.811459900Z'/><EventRecordID>11148</EventRecordID><Correlation/><Execution ProcessID='1892' ThreadID='3044'/><Channel>Microsoft-IIS-Logging/Logs</Channel><Computer>sw2iisxft</Computer><Security UserID='S-1-5-18'/></System><EventData><Data Name='EnabledFieldsFlags'>2149961727</Data><Data Name='date'>2024-04-04</Data><Data Name='time'>12:23:37</Data><Data Name='cs-username'>ER\4dy</Data><Data Name='s-sitename'>W3SVC5</Data><Data Name='s-computername'>sw2if</Data><Data Name='s-ip'>192.168.32.86</Data><Data Name='cs-method'>GET</Data><Data Name='cs-uri-stem'>/</Data><Data Name='cs-uri-query'>blockid=2&roleid=8&logid=21</Data><Data Name='sc-status'>200</Data><Data Name='sc-win32-status'>0</Data><Data Name='sc-bytes'>39600</Data><Data Name='cs-bytes'>984</Data><Data Name='time-taken'>37</Data><Data Name='s-port'>443</Data><Data Name='csUser-Agent'>Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/123.0.0.0+Safari/537.36+Edg/123.0.0.0</Data><Data Name='csCookie'>-</Data><Data Name='csReferer'>https://tsidologg/?blockid=2&roleid=8</Data><Data Name='cs-version'>-</Data><Data Name='cs-host'>-</Data><Data Name='sc-substatus'>0</Data><Data Name='CustomFields'>X-Forwarded-For - Content-Type - https on host tsidologg</Data></EventData></Event>
I've tried every combination in props.conf that I can think of. This should work, but doesn't:
TIME_PREFIX = <Data Name='date'>
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = <Data Name='date'>%Y-%m-%d</Data><Data Name='time'>%H:%M:%S</Data>
TZ = UTC
Any ideas?
Hi @bhaskar5428, you need to change the regex capture group to cover only the time, like below:
| rex field=_raw "\"@timestamp\":\"\d{4}-\d{2}-\d{2}[T](?<Time>\d{2}:\d{2})"
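You can sanity-check the capture group outside Splunk before putting it into a rex. Here is the same pattern in Python (named groups use `(?P<Time>...)` there instead of SPL's `(?<Time>...)`), run against a made-up sample event:

```python
import re

# Same pattern as the SPL rex above, in Python named-group syntax.
pattern = r'"@timestamp":"\d{4}-\d{2}-\d{2}T(?P<Time>\d{2}:\d{2})'

# Hypothetical sample raw event for illustration.
sample = '{"@timestamp":"2024-05-01T09:30:15.123Z","message":"ok"}'

match = re.search(pattern, sample)
print(match.group("Time"))  # 09:30
```

Because the group stops after `\d{2}:\d{2}`, only hours and minutes are captured; the seconds and milliseconds are left out, which is exactly what the rex does.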
I have a situation with ingestion latency that I am trying to fix. The heavy forwarder is set to Central Standard Time in the front end, under Preferences. Does the front-end setting have any bearing on props.conf?