All Posts


@Venality  For troubleshooting, you can start with the steps below. Check your inputs.conf for the script and see whether passAuth is configured with the correct user; if not, configure it explicitly and try again, e.g.:

[script://$SPLUNK_HOME/etc/apps/SA-Phantom/bin/phantom_retry.py]
interval = 60
passAuth = splunk-system-user

# https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Inputsconf

Ensure the user has enough capabilities (e.g., admin_all_objects, list_storage_passwords). Also check $SPLUNK_HOME/var/log/splunk/python.log for any relevant error messages.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving it karma. Thanks!
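A side note on the mechanism: when passAuth is set, splunkd writes a session key to the script's stdin as the scripted input launches, which is where a "No session key received" message typically originates. A minimal sketch of that read (illustrative only; the actual phantom_retry.py logic may differ):

import sys

# When passAuth is configured, splunkd passes the session key on stdin
session_key = sys.stdin.readline().strip()

if not session_key:
    # Without a valid passAuth user, nothing arrives on stdin
    sys.stderr.write("No session key received\n")
    sys.exit(1)

# The key can then authenticate REST calls back to splunkd,
# e.g. with the header "Authorization: Splunk <session_key>"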
I'm having an issue trying to set up an Audit Input with the server I created connecting my Splunk SOAR and Enterprise. The server is set up correctly with the authentication key, and when I test the connection it's good, but for some reason when I set the interval to 60 I just get "No session key received" errors from the phantom_retry.py script. I'm not sure where I'm supposed to update a key, or whether I'm supposed to edit a certain script when I made the server, but I could use some assistance. Thanks!
[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.0.71\logs\*]
disabled = false
host = NJROS1BVA0621
alwaysOpenFile = 1
sourcetype = Image Importer Logs

Is there a way to add a wildcard for any upcoming version updates, like below? Will this work?

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.*\logs\*]

Or does it have to be like this?

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.[0-9].[0-9][0-9]\logs\*]
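A note on the wildcard question above: monitor paths aren't regular expressions; as far as I know, only the Splunk wildcards * (matches within a single path segment) and ... (matches across segments) are supported, so the [0-9] character-class form wouldn't work as written. A sketch of the wildcard variant (same paths as above, purely illustrative):

[monitor://\\njros1bva0597\d$\LogFiles\warcraft-9.*\logs\*]
disabled = false
host = NJROS1BVA0621
alwaysOpenFile = 1
sourcetype = Image Importer Logs

If the * ends up matching unwanted sibling directories, the whitelist setting in inputs.conf (which is a regex applied to the full file path, unlike the monitor path itself) can tighten it, e.g. whitelist = warcraft-9\.\d+\.\d+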
Hi all, When creating a systemd unit file for an old UF (<9.1) using "splunk enable boot-start -systemd-managed 1 -user ..", a systemd file is created with this content:

[Service]
ExecStartPost=/bin/bash -c "chown -R splunkfwd:splunkfwd /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunkfwd:splunkfwd /sys/fs/cgroup/memory/system.slice/%n"

This is also documented here: https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.4/working-with-the-universal-forwarder/manage-a-linux-least-privileged-user in "Reference unit file template".

Does anyone have an idea why this is done? The paths use cgroup v1, which only exists on old Linux systems; on up-to-date systems this chown fails, but the service starts anyway. When creating a systemd config with recent UFs, these ExecStartPost parameters are not set anymore. BUT when installing Splunk Enterprise, this line is set in the systemd unit:

ExecStartPost=-/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/system.slice/%n"

AFAIK Splunk core uses cgroups for Workload Management, but not on the UF. Is the reference unit file template for the UF just old and wrong, and the settings never made sense, or is there a good reason? Thanks for your help and best regards, Andreas
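On the practical side, if the failing chown lines are noisy on cgroup-v2 hosts, one workaround (a sketch only; it assumes the stock SplunkForwarder.service unit name and the splunkfwd user from the question) is a drop-in override that resets the list, similar to what newer UFs generate:

# /etc/systemd/system/SplunkForwarder.service.d/override.conf
[Service]
# An empty assignment clears the ExecStartPost entries inherited from the unit
ExecStartPost=
# Optionally re-add the unified (cgroup v2) path; the leading "-" makes
# a non-zero exit status non-fatal
ExecStartPost=-/bin/bash -c "chown -R splunkfwd:splunkfwd /sys/fs/cgroup/system.slice/%n"

Apply with systemctl daemon-reload followed by a service restart.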
Hi @sswigart  Just to clarify, do you intend to use the Linux host as a Deployment Server for your Windows servers? If so, yes, this will not be a problem; the Deployment Server (DS) can be used to deploy to both Linux and Windows hosts.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
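To illustrate the scoping (a sketch; the server class name windows_hosts is made up, while Splunk_TA_windows is the add-on's directory name), serverclass.conf on the DS could target only the Windows clients:

# serverclass.conf on the Deployment Server
[serverClass:windows_hosts]
# Consider all phoning-home clients...
whitelist.0 = *
# ...but restrict the class to Windows machine types
machineTypesFilter = windows-x64

[serverClass:windows_hosts:app:Splunk_TA_windows]
restartSplunkd = true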
What do you mean by "our deployment and indexer are on the same server"? What is this "deployment", a deployment server or something else? Do you have one indexer or several, and/or a cluster? When you are deploying with IA, are the targets only HFs, or are you also managing UFs or other HFs without IA rulesets?
What is in your props.conf?
Here are the confs that worked for us:

server.conf

[general]
serverName = [splunkhostname]
pass4SymmKey = [pass4SymmKey]
sessionTimeout = 15m

[sslConfig]
sslPassword = [sslPassword]
sslRootCAPath = /opt/splunk/etc/auth/dod_chain.pem
sslPassword = [pw-hash]

### Omitting lmppol, license, kvstore, diskusage settings ###

web.conf

[settings]
### START SPLUNK WEB USING HTTPS:8443 ###
enableSplunkWebSSL = 1
httpport = 8443
privKeyPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\privkey.pem
serverCert = $SPLUNK_HOME\etc\auth\DOD.web.certificates\cert.pem

### TOKEN AUTHENTICATION ###
requireClientCert = true
sslRootCAPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\dod_chain.pem
enableCertBasedUserAuth = true
SSOMode = permissive
trustedIP = 127.0.0.1
certBasedUserAuthMethod = PIV
certBasedUserAuthPivOidList = Microsoft Universal Principal Name
allowSsoWithoutChangingServerConf = 1

### Omitting STIG settings (e.g., session timeout, login banner, etc.) ###

authentication.conf

### [Omitting splunk_auth password/user policies] ###

[authentication]
authSettings = ISXX DC-01 LDAPS Authentication, ISXX LDAPS Authentication
authType = LDAP

[roleMap_ISXX DC-01 LDAPS Authentication]
admin = Network Administrators
power = Network Administrators
user = Domain Admins;Network Administrators;Protected Users

[roleMap_ISXX DC-02 LDAPS Authentication]
admin = Network Administrators
power = Network Administrators
user = Domain Admins;Network Administrators;Protected Users

[ISXX DC-01 LDAPS Authentication]
SSLEnabled = 1
anonymous_referrals = 0
bindDN = CN=ldap.splunk,OU=Privileged Users,DC=XXXX,DC=YYYY
bindDNpassword = [pw-hash]
charset = utf8
emailAttribute = mail
enableRangeRetrieval = 0
groupBaseDN = CN=Network Administrators,OU=Users,DC=XXXX,DC=YYYY
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = dc-01.XXXX.YYYY
nestedGroups = 0
network_timeout = 29
pagelimit = -1
port = 636
realNameAttribute = cn
sizelimit = 5000
timelimit = 25
userBaseDN = OU=Privileged Users,DC=XXXX,DC=YYYY
userNameAttribute = userPrincipalName

[ISXX DC-02 LDAPS Authentication]
SSLEnabled = 1
anonymous_referrals = 0
bindDN = CN=ldap.splunk,OU=Privileged Users,DC=XXXX,DC=YYYY
bindDNpassword = [pw-hash]
charset = utf8
emailAttribute = mail
enableRangeRetrieval = 0
groupBaseDN = CN=Network Administrators,OU=Users,DC=XXXX,DC=YYYY
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = dc-01.XXXX.YYYY
nestedGroups = 0
network_timeout = 29
pagelimit = -1
port = 636
realNameAttribute = cn
sizelimit = 5000
timelimit = 25
userBaseDN = OU=Privileged Users,DC=XXXX,DC=YYYY
userNameAttribute = userPrincipalName
MY BAD on #3, meant to write:

3. certBasedAuthMethod = Microsoft Universal Principal Name (NOT a specific OID)

Also, here's the how-to for the NPE portal. There has been a lot of confusion on which certificate type, the long list of CC/S/A's, and key selections. Although the CC/S/A's might vary across circuits (Westford is using DOD), the common criteria are:

Authenticate to the NPE Portal with your .DA token for instant approval
Email: Your SIPR email (I don't think this matters)
Subject/CN: device DNS name (automatically appears after pasting the CSR text)
Certificate Profile: TLS Server
Key Usage Selections: digitalSignature, keyEncipherment
Extended Key Usage Options: id-kp-serverAuth
Subject Alternative Name: + Device DNS Hostname w/FQDN, + IP Address (the actual IP, not 127.0.0.1 as was discussed in some channels)
CC/S/A: DOD (yours may be different)
Validity: Will default to 1 year; manually increase to 3.
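For the CSR itself, a sketch of generating one with matching SAN entries (hostname, IP, and file names here are hypothetical; -addext needs OpenSSL 1.1.1 or newer):

# Hypothetical names/paths; adjust CN, SAN, and key size to your policy
openssl req -new -newkey rsa:2048 -nodes \
  -keyout splunk-host.key -out splunk-host.csr \
  -subj "/CN=splunk-host.example.mil" \
  -addext "subjectAltName=DNS:splunk-host.example.mil,IP:10.10.10.5"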
Thanks, I did have the FQDN, so the search is still on; hopefully others have had a similar issue and found a resolution. Any ideas on which log would show why? I'm not able to find one that gives me a hint.
I am standing up a Linux server to host Splunk Enterprise 9.4.3. I have 30+ Windows hosts. Can I upload the Splunk Add-on for Microsoft Windows and use it to configure the Windows hosts even though the server is running on a Linux host? Thank you
Hi @MatheoCaneva1  You can send data to a Splunk index using a Python script via the HTTP Event Collector (HEC). You will need to enable HEC in Splunk if it isn't already, create a token, and specify the target index in the token configuration. Here's a basic Python example using the requests library to send a JSON event:

import requests
import json

# Replace with your values
splunk_host = "https://your-splunk-instance:8088"  # HEC endpoint (default port 8088)
hec_token = "your-hec-token-here"
index = "your_target_index"  # Ensure the token allows this index

# Sample event data
event_data = {
    "event": "This is a test event from Python",
    "sourcetype": "mysourcetype",
    "index": index,
    "fields": {
        "severity": "info"
    }
}

# Send the event
headers = {
    "Authorization": f"Splunk {hec_token}"
}
response = requests.post(f"{splunk_host}/services/collector/event",
                         headers=headers, data=json.dumps(event_data))
print(response.status_code)
print(response.text)

This script sends a single event as JSON to the specified Splunk index; however, you can send an array of events if needed. Ensure the HEC token has permissions for the target index and the Splunk instance is reachable (handle SSL if using HTTPS). I would recommend testing with small data volumes first. Check out https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector for more info on HEC, including setup, as well as https://help.splunk.com/en/splunk-enterprise/get-data-in/collect-http-event-data/http-event-collector-examples which covers further examples.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
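One note on the batching mentioned above: HEC accepts multiple event objects concatenated in a single POST body, so a batch version of the same sketch (reusing the hypothetical host/token values from the example) could look like:

import requests
import json

splunk_host = "https://your-splunk-instance:8088"
hec_token = "your-hec-token-here"

# Build a handful of sample events for one request
events = [
    {"event": f"batch event {i}", "sourcetype": "mysourcetype", "index": "your_target_index"}
    for i in range(5)
]

# HEC parses back-to-back JSON objects in one body, so simply
# concatenate the serialized events
payload = "".join(json.dumps(e) for e in events)

response = requests.post(
    f"{splunk_host}/services/collector/event",
    headers={"Authorization": f"Splunk {hec_token}"},
    data=payload,
)
print(response.status_code, response.text)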
You must get it to Splunk somehow. The easiest way would be to send events to a HEC input created on a HF or indexer.
Hi everyone! Quick question. I would like to know how I can send data to an index using a Python script. We need to ingest some data without using a forwarder, and I would like to use a script for this reason. Did anyone do this already? Ty! Regards.
Hey @sudha_krish  Please avoid calling out specific users on here; it won't help get your question answered. Could you also share your props.conf config? I don't think this is correct:

SOURCE_KEY=MetaData:orig_sourcetype

Can you try:

[copy_original_sourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = (.+)
FORMAT = orig_sourcetype::$1
WRITE_META = true

[clone_for_thirdparty]
SOURCE_KEY = MetaData:Index
REGEX = ^test_np$
DEST_KEY = MetaData:Sourcetype
CLONE_SOURCETYPE = data_to_thirdparty
WRITE_META = true

[sourcetype_raw_updated]
INGEST_EVAL = _raw=_raw." orig_sourcetype=".orig_sourcetype

Do you want orig_sourcetype added to the raw text value?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
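One more thing worth checking: transforms only fire if props.conf references them, along the lines of this sketch (the incoming sourcetype name is a placeholder):

# props.conf -- [your_incoming_sourcetype] is hypothetical
[your_incoming_sourcetype]
TRANSFORMS-clone = copy_original_sourcetype, clone_for_thirdparty

# The raw rewrite would then apply to the cloned sourcetype
[data_to_thirdparty]
TRANSFORMS-rewrite = sourcetype_raw_updated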
Yes, this is one of the first things I've found when searching, and I reset that password on both the indexer and my forwarders, and still nothing.
I just want to add orig_sourcetype to the cloned event with the original sourcetype value.
I'm cloning the event and, before cloning, extracting the sourcetype to use later.

transforms.conf

[copy_original_sourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = (.+)
FORMAT = orig_sourcetype::$1
WRITE_META = true

[clone_for_thirdparty]
SOURCE_KEY = _MetaData:Index
REGEX = ^test_np$
DEST_KEY = MetaData:Sourcetype
CLONE_SOURCETYPE = data_to_thirdparty
WRITE_META = true

[sourcetype_raw_updated]
SOURCE_KEY = MetaData:orig_sourcetype
REGEX = ^orig_sourcetype::(.*)$
FORMAT = $1##$0
DEST_KEY = _raw

But when I try to retrieve the extracted original value, I'm getting nothing. Is there any way to persist the original sourcetype? @PickleRick @isoutamo @gcusello
So we have a single search head here. I should mention that our deployment and indexer are on the same server. I am aware that best practice is to separate them. Do you think this could be it? As far as how I've configured Ingest Actions, I only have one rule now to drop all PerfmonMk:CPU > filter using regex > "^PerfmonMk:CPU$". It does not seem to be dropping the data.
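For comparison, the traditional props/transforms equivalent of that drop rule (a sketch; the stanza name drop_perfmon_cpu is made up, and it belongs on the indexer or the first full Splunk instance in the data path):

# props.conf
[PerfmonMk:CPU]
TRANSFORMS-drop = drop_perfmon_cpu

# transforms.conf
[drop_perfmon_cpu]
# Match every event of this sourcetype and route it to the null queue
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue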
AFAIR the stock TA_nix extractions aren't great for auditd logs. I'd rather go with https://splunkbase.splunk.com/app/4232