All Posts

In addition to my previous message, I've realised that all the servers might also have the same serverName in $SPLUNK_HOME/etc/system/local/server.conf - is that the case too?

[general]
serverName = <YourServerName>

This also cannot be overwritten with an app config because it's in the system/local directory, so if you want to update it you would need to do something similar to the regenerate_guid script I posted. Reminder - that script is less than ideal, so proceed with caution.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
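To spot duplicated serverName values before fixing anything, a small read-only check could be run on each host. This is a minimal sketch, assuming the default Universal Forwarder install path (adjust SERVER_CONF for your environment):

```python
import configparser

# Assumed default UF path - adjust for your installation
SERVER_CONF = "/opt/splunkforwarder/etc/system/local/server.conf"

def read_server_name(path):
    """Return the serverName from a server.conf-style file, or None if unset.
    configparser silently ignores missing files, so this is safe to run anywhere."""
    config = configparser.ConfigParser(strict=False)
    config.read(path)
    if config.has_option("general", "serverName"):
        return config.get("general", "serverName")
    return None

if __name__ == "__main__":
    print(read_server_name(SERVER_CONF))
```

Collecting the output from each host makes it easy to see which forwarders share a name before deciding what to change.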
Please check my splunkd.log.
Hi @Na_Kang_Lim

You're right in that the GUID is set in $SPLUNK_HOME/etc/instance.cfg, such as:

[general]
guid = 5e7f7ee5-ab7b-4e90-8f9e-b2cf1f86e6f3

Unfortunately the DS isn't designed to write to this location, as you would typically distribute apps from it. Depending how confident you are feeling, it would be possible to write a script which you could deploy to all the clients using your DS which would write a new GUID to that file; once applied, you can then remove the app. Below I'll include snippets of an app I've used previously for this problem.

Disclaimer - it's generally not advised to do this sort of thing, use extreme caution.

1. Create an app on your deployment server - something like "regenerate_guid".

2. Create a bin directory within the app and create the following file called regenerate_guid.py:

import uuid
import configparser

INSTANCE_CFG_PATH = "/opt/splunkforwarder/etc/instance.cfg"

def generate_guid():
    return str(uuid.uuid4())

def write_guid_to_instance_cfg(guid):
    config = configparser.ConfigParser()
    try:
        config["general"] = {"guid": guid}
        # Always write the GUID, overwriting any previous value
        with open(INSTANCE_CFG_PATH, "w") as f:
            config.write(f)
        print(f"Successfully wrote new GUID to {INSTANCE_CFG_PATH}")
    except Exception as e:
        print(f"Error writing to {INSTANCE_CFG_PATH}: {e}")

def main():
    new_guid = generate_guid()
    write_guid_to_instance_cfg(new_guid)
    print("Regenerated and overwrote GUID.")

if __name__ == "__main__":
    main()

3. Create an inputs.conf file in the default/ directory of your app:

[script://./bin/regenerate_guid.py]
# Run once a day (86400) but in reality the app should be removed once actioned!
interval = 86400
source = regenerate_guid
sourcetype = regenerate_guid
index = main
disabled = 0

[script://./bin/regenerate_guid.py]: defines a script input that runs the Python script. The path is relative to the app's directory structure.
interval: how often the script should run (in seconds). We only want it to run once, so I've set this to 1 day - remove the app once it has run.
source/sourcetype/index: not really useful, as we don't want the data it outputs, but typical for a modular input.
disabled = 0: enables the input.

4. Create default/app.conf:

[install]
state = enabled

[launcher]
author = Your Name
description = DANGEROUS APP: Periodically checks and regenerates the Splunk Instance GUID if it's missing. USE WITH EXTREME CAUTION! REQUIRES inputs.conf for proper deployment.
version = 1.1.0

[package]
id = regenerate_guid

Workflow: the Deployment Server deploys the regenerate_guid app to the forwarder; configure the DS so the app causes a restart. The input should run straight away once installed and then every 24hrs - check the _internal index to confirm it has run. Uninstall the app by removing the app or the client(s) from the serverclass/DS.

Important Considerations (Reiterated!)
- This approach is still potentially dangerous if mishandled!
- The script runs periodically, which means it will reset the GUID every 24 hours. Make sure you remove it once it has run.
- Permissions are critical. If the Splunk user does not have write access to instance.cfg, the script will fail.
- Testing is essential! Test in a non-production environment before deploying to production, or at least deploy to a single host you can directly access to validate.
- Monitor the logs! Watch the splunkd.log file on the forwarders to ensure the script is running correctly.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
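To confirm the regenerate_guid script actually took effect on a host, the new value could be validated after a run. A hedged sketch, reusing the same assumed forwarder path as the script:

```python
import configparser
import uuid

# Path assumed from the script above - adjust for your installation
INSTANCE_CFG_PATH = "/opt/splunkforwarder/etc/instance.cfg"

def is_valid_guid(value):
    """True if value parses as a canonical hyphenated UUID, the format Splunk uses."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (TypeError, ValueError, AttributeError):
        return False

def read_guid(path):
    """Return the guid from an instance.cfg-style file, or None if unset/missing."""
    config = configparser.ConfigParser()
    config.read(path)
    return config.get("general", "guid", fallback=None)

if __name__ == "__main__":
    guid = read_guid(INSTANCE_CFG_PATH)
    print("valid" if guid and is_valid_guid(guid) else "missing or invalid")
```

Comparing the printed GUIDs across hosts is a quick way to verify the duplicates are gone before removing the app.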
@RCavazana2023 can you please help on this ticket - https://community.splunk.com/t5/Getting-Data-In/Akamai-data-input-throwing-error/m-p/742944#M118031
Java already installed on splunk instance.
Hi @Karthikeya

Please check your splunkd.log in $SPLUNK_HOME/var/log/splunk/splunkd.log for any other errors around the ModularInputs component - do you have other errors relating to TA-Akamai_SIEM?

Alternatively, try the following search:

index=_internal component=ModularInputs log_level=Error

Do you see anything like "script running failed (PID 51184 exited with code 127)"?

Have you set up Java? This is required for the app to work; without a correct Java setup the endpoint can fail to initialise, and you will get error messages. For more info on installation check out https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector#install-the-splunk-connector

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
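Exit code 127 conventionally means "command not found", so a quick pre-flight check that Java is actually on the PATH may help narrow this down. A sketch, not part of the add-on itself:

```python
import shutil
import subprocess

def java_available():
    """True if a java binary is on PATH - exit code 127 from a modular
    input often means the binary it shells out to was not found."""
    return shutil.which("java") is not None

def java_version():
    """Return the java version banner, or None if java is missing."""
    if not java_available():
        return None
    # java prints its version banner to stderr, not stdout
    result = subprocess.run(["java", "-version"],
                            capture_output=True, text=True)
    return result.stderr.strip() or result.stdout.strip()

if __name__ == "__main__":
    print(java_version() or "java not found on PATH")
```

Running this as the same user that runs splunkd matters, since the service user's PATH can differ from an interactive shell's.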
As the title suggests, I have multiple Universal Forwarders sharing the same Instance GUID due to a mistake by the owners of the servers - they were simply copying the .conf files from server to server.

So my question is, what is the impact of this? I think missing logs from these machines is one impact, as every 10-20 seconds the IP and hostname change, but the Instance GUID and Instance Name remain the same.

And of course the obvious question: how do I fix this? Solving it from a higher level of the system is preferred, such as via the deployment server (which I can SSH into), since I have limited access to the servers and would probably have to TeamViewer into them or write a guide for the owners.

I read that the GUID is now in the `$SPLUNK_HOME/etc/instance.cfg` file, and there is probably a GUID there which would be the same across the servers. Can I just delete the GUID line and restart the splunk service, and the deployment server would give it a new one? Can I delete the record from the Deployment Server UI so that a new one is generated and the forwarder's instance.cfg file is auto-updated? I read the docs (instance.cfg.conf - Splunk Documentation) and they say not to edit the file, so I am a bit confused. I also saw that the docs mention server.conf also has the GUID value, so do I have to do anything in the server.conf file?
Very kewl. Thank you Will - will give it a shot for sure!
Oh, unless you can make very strong assumptions about your data, you're in for a treat.

1. You will replace any escaped single quotes which might be in the original data (and no, a single-backslash negative lookbehind will not cut it).
2. You will not replace any unescaped double quotes from the original data (and again, finding them and properly escaping them is not so easy in the general case - see point 1).

Long story short - don't manipulate structured data with regexes!
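A tiny illustration of the point: a naive regex quote-swap mangles escaped quotes that a real parser handles cleanly. The sample data here is made up:

```python
import json
import re

# Made-up JSON whose value contains escaped double quotes
raw = '{"msg": "he said \\"hi\\" to me"}'

# Naive approach: swap every double quote for a single quote.
# This also rewrites the escaped quotes inside the value, corrupting it.
naive = re.sub('"', "'", raw)

# Proper approach: parse the structure, then work with real values
parsed = json.loads(raw)

print(naive)          # no longer valid JSON, escapes mangled
print(parsed["msg"])  # the original value, quotes intact
```

The regex version leaves behind sequences like \' that neither JSON nor the original data ever contained, which is exactly the failure mode described in point 1.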
Hi @epw0rrell

Try the following:

index=* <<Your Other Search Criteria>>
| lookup your_lookup_table user AS user OUTPUT src AS lookup_src
| where isnotnull(lookup_src) AND src != lookup_src
| table user src

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
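The comparison that search performs can be sketched outside SPL. The data below is hypothetical; only the field names (user, src) come from the question:

```python
# Hypothetical lookup (user -> expected src) and search results
lookup = {"alice": "10.0.0.1", "bob": "10.0.0.2"}

events = [
    {"user": "alice", "src": "10.0.0.1"},     # matches the lookup -> drop
    {"user": "alice", "src": "192.168.1.9"},  # mismatch -> report
    {"user": "carol", "src": "10.0.0.3"},     # no lookup entry -> drop
]

def mismatches(events, lookup):
    """Keep events whose user is in the lookup but whose src differs,
    mirroring the lookup + where clause in the SPL."""
    return [e for e in events
            if e["user"] in lookup and lookup[e["user"]] != e["src"]]

if __name__ == "__main__":
    for row in mismatches(events, lookup):
        print(row["user"], row["src"])
```

The isnotnull(lookup_src) check in the SPL corresponds to the `e["user"] in lookup` condition here: users absent from the lookup are skipped rather than reported.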
Here is a working example using makeresults too:

| makeresults
| eval _raw="EventType=\"Device\" Event=\"InstallProfileConfirmed\" User=\"sysadmin\" EnrollmentUser=\"hasubram\" DeviceFriendlyName=\"blabla MacBook Air macOS 15.3.2 Q6LW\" EventSource=\"Device\" EventModule=\"Devices\" EventCategory=\"Command\" EventData=\"Profile=Apple macOS Apple Intelligence Restrictions\" Event Timestamp: Mar 28 09:29:40"
| append [| makeresults | eval _raw="EventType=\"Device\" Event=\"DeviceOperatingSystemChanged\" User=\"sysadmin\" EnrollmentUser=\"hasubram\" DeviceFriendlyName=\"blabla MacBook Air macOS 15.3.2 Q6LW\" EventSource=\"Device\" EventModule=\"Devices\" EventCategory=\"Assignment\" EventData=\"Device=75639\" Event Timestamp: Mar 28 09:29:29"]
| kv
| rex field=_raw "Event Timestamp: (?<EventTime>.+)$"
| eval _time=strptime(EventTime, "%b %d %H:%M:%S")
| search DeviceFriendlyName="blabla MacBook Air macOS 15.3.2 Q6LW"
| eval {Event}_time=_time
| stats latest(*_time) as *_time values(Event) as events by DeviceFriendlyName
| where MATCH(events, "DeviceOperatingSystemChanged") AND MATCH(events, "InstallProfileConfirmed") AND DeviceOperatingSystemChanged_time < InstallProfileConfirmed_time

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
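The stats/where logic in these searches boils down to "take the latest time per event type per device, then compare the ordering". A rough Python equivalent, with timestamps borrowed from the makeresults example (the year is made up, since the raw events omit it):

```python
from datetime import datetime

# (device, event, timestamp) tuples mirroring the sample events
events = [
    ("MacBook Air Q6LW", "DeviceOperatingSystemChanged",
     datetime(2025, 3, 28, 9, 29, 29)),
    ("MacBook Air Q6LW", "InstallProfileConfirmed",
     datetime(2025, 3, 28, 9, 29, 40)),
]

def os_changed_before_profile(events):
    """Keep the latest timestamp per (device, event), then report devices
    where the OS change precedes the profile install - the same shape as
    the stats latest(...) / where pipeline."""
    latest = {}
    for device, event, ts in events:
        key = (device, event)
        if key not in latest or ts > latest[key]:
            latest[key] = ts
    result = []
    for device in sorted({d for d, _, _ in events}):
        changed = latest.get((device, "DeviceOperatingSystemChanged"))
        installed = latest.get((device, "InstallProfileConfirmed"))
        if changed and installed and changed < installed:
            result.append(device)
    return result
```

Devices missing either event type are dropped, just as the MATCH(events, ...) conditions require both values to be present.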
Hi @Blueochotona

Have a look at the following - does this achieve what you're looking for?

index=*** <<Search Parameters>> NOT DeviceFriendlyName IN (*15.3.0*,*15.3.1*) ( (EventType="Device" Event="DeviceOperatingSystemChanged") OR (EventType="Device" Event="InstallProfileConfirmed") )
| eval {Event}_time=_time
| stats latest(*_time) as *_time values(Event) as events by DeviceFriendlyName
| where MATCH(events, "DeviceOperatingSystemChanged") AND MATCH(events, "InstallProfileConfirmed") AND DeviceOperatingSystemChanged_time < InstallProfileConfirmed_time

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
Hello, I have a lookup table with fields user and src. I want to table results [user src] where the src within my search != the src listed within the lookup table. So first I need to search for matching user rows, then I need to compare the src of the search with the src value in the lookup file. If the src is different, I want to table the new src value from the search. Can someone help me with this?  Thanks so very much.
This is the splunkd file.
I solved the issue by unchecking the inputs in the app (since they are disabled by default) and making sure the API permissions were correct in SentinelOne. In my case, I just created a new service user in SentinelOne and used the API token generated for that service user. The user has a scope of access to the site.
You are correct. I had prepared and found steps for fixing indexers, but they seemed fine. The configuration for manager_uri and the like is largely based on IP (which is another topic on its own), and the IP did not change. So endpoints should be able to reach the "modified" server (but may expect a different response). I still have to dig into indexer_discovery (and the like); I did not prepare for it. According to my documentation it is not configured.
How do I check splunkd errors in the UI?
@Karthikeya

HTTP 404: this status code means the requested resource (in this case, likely a Splunk REST API endpoint) could not be found. This could happen if the app is trying to interact with an endpoint that doesn't exist or is misconfigured.

Action Forbidden: this implies that even if the endpoint exists, the user or process attempting the action lacks the necessary permissions to complete it, or the action itself is restricted.

If your API credentials (Client Token, Client Secret, Access Token, Hostname) are wrong or don't have the required permissions, the API might return a 403/404 error.

- Did you restart the HF after installing the add-on?
- Check splunkd.log for any Akamai-related errors.
- Validate the Akamai credentials and endpoint format.