All Posts

OK. So if you split your events received by syslog into separate files based on the source device, you should configure your monitor inputs to pick up the different kinds of files with specific sourcetypes, so you don't ingest the whole big directory with all your "network" logs but instead fine-tune it with subsets of files pertaining to specific devices. If you're saving all syslog-received events to one big file, that's much harder, because you can only associate one sourcetype with a given monitor input. You could try to dynamically overwrite it later during the ingestion process using props and transforms, but that will be way, way harder than doing the splitting at the syslog-receiver level.
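For illustration, a minimal sketch of what such split monitor inputs could look like, assuming the syslog receiver writes per-device-type directories (the paths, sourcetypes, and index name here are all assumptions, not anything from the original question):

# inputs.conf -- illustrative per-device-type monitors
[monitor:///var/log/syslog/firewalls/*.log]
sourcetype = cisco:asa
index = network_devices

[monitor:///var/log/syslog/switches/*.log]
sourcetype = cisco:ios
index = network_devices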
What is your full search?
Hey splunkers, We are trying to implement and segregate roles in SOAR, so we have several roles with several users in each. The problem is that every user can see all other users and assign containers/tasks to them. Is there a way to restrict visibility of, and assignment to, other users in the platform? I know it should probably be related to users & roles permissions, but I'm not getting it right... Thanks
Hi @tuts,
for configuring syslog, you should follow the instructions at https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Data/Monitornetworkports
For Sysmon, you should download the Splunk Add-on for Sysmon and follow the instructions at https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/About
Ciao. Giuseppe
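If you end up receiving syslog directly on a network port per that first page, the input could look roughly like the sketch below; the port, sourcetype, and index are assumptions, and in practice a dedicated syslog server writing to monitored files is often preferred:

# inputs.conf -- minimal direct UDP syslog input (illustrative)
[udp://514]
sourcetype = syslog
index = main
connection_host = ip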
Awesome. I so often forget the "chart" command for such scenarios. Thank you
Hello Splunk Community, I am working on a project that uses Splunk, and I need your assistance in properly installing and configuring both Syslog and Sysmon to ensure efficient data collection and analysis.
Hi Team, I am unable to log in to the controller; it throws a "Permission Issue" error. Earlier I was able to log in, but currently I cannot. While signing in, the page shows authentication success, but then it shows the permission issue. Please help with this on priority! Please find the attached screenshot for your reference. error screenshot Thanks & Regards, PadmaPriya
Hi @rikinet , in addition to the perfect solution from @bowesmana , you could test the Horizon Chart add-on (https://splunkbase.splunk.com/app/3117) that gives you the requested parallel visualization. Ciao. Giuseppe
url = "https://xyz.com/core/api-ua/user-account/stix/v2.1?isSafe=false&key=key"

# Path to your custom CA bundle (optional, if you need to use a specific CA bundle)
ca_bundle_path = "/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem"

# Make the API call through the HTTPS proxy with SSL verification
response = requests.get(url, proxies=proxies, verify=ca_bundle_path)
print("Response content:", response.content)

If I use this code in a separate Python script, it works and gives the response. However, if I use the same code in Splunk, it doesn't. I get:

SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1106)'))

The code that is being used is:

files = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'App-Name', 'certs')
pem_files = [f"{files}/{file}" for file in os.listdir(path=files) if (file.endswith('.pem') or file.endswith('.crt'))]
url = f"{url}/core/api-ua/v2/alerts/attack-surface?type=open-ports&size=1&key={api_token}"
if pem_files:
    logger.info(f"Certificate used: {pem_files[0]}")
    logger.info(requests.__version__)
    logger.info(urllib3.__version__)
    logger.info(proxy_settings)
    response = requests.request(
        "GET",
        url,
        verify=pem_files[0],
        proxies=proxy_settings
    )
    response.raise_for_status()

In the place of verify=pem_files[0], I have added verify="/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem". Still the same error.
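"unable to get issuer certificate" typically means the file passed to verify= contains the leaf or root certificate but not the full chain. Not an official fix, just a sketch to test that theory from within Splunk: concatenate every certificate in the app's certs/ directory into one bundle and hand that single file to requests (all names besides the paths quoted in the post above are hypothetical):

import os
import requests

# Hypothetical test: build one combined CA bundle so the intermediate and
# root CAs end up in the same file that requests receives via verify=.
certs_dir = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'App-Name', 'certs')
bundle_path = os.path.join(certs_dir, 'combined_bundle.pem')

with open(bundle_path, 'w') as bundle:
    for name in sorted(os.listdir(certs_dir)):
        if name.endswith(('.pem', '.crt')) and name != 'combined_bundle.pem':
            with open(os.path.join(certs_dir, name)) as cert:
                bundle.write(cert.read().strip() + '\n')

url = "https://xyz.com/core/api-ua/user-account/stix/v2.1?isSafe=false&key=key"
proxy_settings = {"https": "http://proxy.example.com:3128"}  # placeholder proxy

response = requests.get(url, proxies=proxy_settings, verify=bundle_path)
response.raise_for_status()
print("Response content:", response.content)

If the standalone script works with the very same ap.pem, it may also be worth comparing which Python interpreter runs in each case; Splunk ships its own Python, so environment variables such as REQUESTS_CA_BUNDLE can differ between the two runs.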
Hi @ITWhisperer , Thank you for the suggestions. This seems to work.
Hi Team, I have developed a sample .NET MSMQ sender and receiver application that uses asynchronous messaging.
Application: interacting with MSMQ (.\\private$\\My queue).
AppDynamics Version: 24.5.2
Transaction Detection: Configured automatic transaction detection rules.
Custom Match Rules: I created custom match rules specifically for MSMQ operations but did not see the expected results. We are expecting an MSMQ entry point for the .NET consumer application. I want to know how long the data has been present in MSMQ. I followed the instructions provided in the link below, but they didn't help: Message Queue Entry Points (appdynamics.com). Please look into this issue and help us resolve it. Thanks in advance.
rex just extracts the fields; now add | table count time if you want each event listed with the count and time. If you want some other representation of those values, please say what you want.
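For example, assuming the extracted fields really are called count and time (the rex pattern here is only a placeholder for the actual one from earlier in the thread):

<your search>
| rex "count=(?<count>\d+)\s+time=(?<time>\S+)"
| table count time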
Hey @isoscow not sure if ideal/best practice/current, but we created alerts which dump results to a csv file using "| outputcsv", which also run a script as part of their alert actions. The script sends the data from the csv to the third party ticketing system.
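Roughly, the script side of that could look like the sketch below; the CSV name, ticketing URL, and payload shape are all made up for illustration (outputcsv writes under $SPLUNK_HOME/var/run/splunk/csv):

import csv
import os
import requests

# Illustrative alert-action script: read the CSV produced by
# "| outputcsv alert_results" and forward each result row as a ticket.
csv_path = os.path.join(os.environ['SPLUNK_HOME'],
                        'var', 'run', 'splunk', 'csv', 'alert_results.csv')
ticket_api = "https://ticketing.example.com/api/tickets"  # placeholder URL

with open(csv_path, newline='') as f:
    for row in csv.DictReader(f):
        # one ticket per result row; adapt the payload to your system
        resp = requests.post(ticket_api, json=row, timeout=30)
        resp.raise_for_status()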
This is generating logs and not the expected output.  
Not sure why that is happening - does the search log show anything? Have you tried using appendpipe rather than append - that will run after the initial search, not before:

| appendpipe
    [ | stats count
      | addinfo
      | eval x=mvappend(info_min_time, info_max_time)
      | mvexpand x
      | rename x as _time
      | eval _t=0
      | table _time, _t ]
Another example here: Solved: Re: How to use eval if there is no result from the... - Splunk Community
As @PickleRick says, streamstats with the rolling 10m time window:

... EventID=4624 ...
| streamstats time_window=10m count by user
| stats max(count) as max by user
| sort - max
| head 1

or if you also want to show the time of the 10 minute window:

...
| streamstats time_window=10m count by user
| eventstats max(count) as max by user
| where count=max
| stats max(count) as max by _time user
| sort - max
I have a dashboard with multiple line charts showing values over time. I want all charts to have the same fixed time (X) axis range, so I can compare the graphs visually. Something like the fixedrange option of the timechart command. However, I use a simple "| table _time, y1, y2, yN" instead of timechart, because I want the real timestamps in the graph, not some approximation due to timechart's notorious binning. To mimic the fixedrange behavior, I append a hidden graph with just two coordinate points (t_min|0) and (t_max|0):

...
| table _time, y1, y2, y3, ..., yN
| append
    [ | makeresults
      | addinfo
      | eval x=mvappend(info_min_time, info_max_time)
      | mvexpand x
      | rename x as _time
      | eval _t=0
      | table _time, _t ]

This appended search appears very cheap to me - on its own it runs in less than 0.5 seconds. But now I have realized that it makes the overall search dramatically slower, about 10x in time. The number of scanned events explodes. This even happens when I reduce it to:

| append maxout=1 [ | makeresults count=1 ]

What's going on here? I would have expected the main search to run exactly as fast as before, and the only toll should be the time required to add one more line with a timestamp to the end of the finalized table, no?
Dashboard Studio has a 1000-item limit because going over that number is really hard for the browser. Classic dashboards don't have the limit, but if you create one with 10,000 items it takes many seconds to show the list, so 25,000 would be rather useless. It's a browser issue more than a Splunk issue. You can't change this in any limits.conf, and your parameter is not even an option. So, as @ITWhisperer says, it's better to structure your dashboard so you have some initial filter, e.g. another dropdown or a text input that is used as a filter to limit the size of the dropdown.
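In a Classic (Simple XML) dashboard, that pattern could look roughly like this; the lookup name and the host field are made up for illustration, and the text input token simply narrows the dropdown's populating search:

<input type="text" token="host_filter">
  <label>Host filter</label>
  <default>*</default>
</input>
<input type="dropdown" token="host">
  <label>Host</label>
  <search>
    <query>| inputlookup all_hosts.csv | search host="$host_filter$*" | dedup host | sort host | head 1000</query>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
</input>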
Sorry for the vagueness; the imprecise wording is intentional due to the nature of the environment I work in. The network devices' logs get sent to a syslog server. The syslog server writes all the logs to files in a specific path. On our Server Class server, the Data Input setting is configured to read all the files from that path (it's a unique enough path) and send them to our "network_devices" index. So the data is being sent to the correct index, but a good portion of the logs get sourcetype=syslog, rather than the TA's sourcetype. That is where I am stuck.
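For reference, a parse-time sourcetype override (the harder route mentioned earlier in the thread) could look roughly like this on the indexer or heavy forwarder; the regex and the target sourcetype are placeholders for whatever identifies one device family in the raw events:

# props.conf
[syslog]
TRANSFORMS-set_device_sourcetype = set_firewall_sourcetype

# transforms.conf
[set_firewall_sourcetype]
REGEX = %ASA-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:asa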