All Posts


There are at least two separate issues here. One is monitoring for data that used to be ingested but no longer is, regardless of the reason (maybe there is a configuration problem on the receiving end, maybe the source simply stopped sending data, maybe something else). There are several apps for that on Splunkbase, for example TrackMe - https://splunkbase.splunk.com/app/4621 Another thing is finding errors coming from your inputs (expired certs, broken connections, non-responding API endpoints and so on). And this is something you'd normally look for in the _internal index indeed; you'll find those primarily in splunkd.log, but specific add-ons can also create their own log files. So it's a bit more complicated than just a single search to find everything that's wrong.
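As a starting point for that second issue, here is a minimal sketch of a search over _internal (the grouping by component is just one common choice, not the only way to slice it):

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
| sort - count

This surfaces which splunkd components are logging errors most often; from there you can drill into the raw events for the noisy ones, and repeat the idea against any add-on-specific log files you find.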
Quick follow-up question: is there a way to include another field (as in column) as part of the final output? For example, if I have something like below, where there is another field "Priority" calculated using eval, how do I include it in the final output? As of now, using the below query, Priority doesn't show any data, and that's expected because Priority is not part of our chart command. I tried all different combos to add Priority to the chart command but couldn't figure out how.

| eval Priority = case(Alert like "001","P1",Alert like "002","P2")
| chart count by Alert status
| addtotals col=t fieldname=Count label=Total labelfield=Alert
| table rule_name Count status Priority
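One possible workaround, sketched under the assumption that Priority can be derived from Alert alone: chart only keeps its aggregation and split fields, so compute Priority after the chart instead of before it:

| chart count over Alert by status
| eval Priority = case(Alert like "001","P1", Alert like "002","P2")
| addtotals col=t fieldname=Count label=Total labelfield=Alert

The eval then runs on the charted rows (one per Alert value), so the column survives into the final output; the added Total row simply gets no Priority.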
That does indeed seem strange. Is this the last part of the search? Do the search dashboard and search log show anything significantly changing after you add this append command?
OK. So if you split your events received by syslog into separate files based on the source device, you should configure your monitor inputs to pick up different kinds of files with specific sourcetypes, so you don't ingest the whole big directory with all your "network" logs but instead fine-tune it with subsets of the files pertaining to specific devices. If you're saving all syslog-received events to one big file, that's way harder, because you can only associate one sourcetype with a given monitor input. You might try to dynamically overwrite it later during the ingestion process using props and transforms, but this will be way harder than doing the splitting on the syslog-receiver level.
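A minimal inputs.conf sketch of the per-device approach (the paths, sourcetypes, and index are hypothetical placeholders for whatever your syslog receiver actually writes):

# one monitor stanza per device directory, each with its own sourcetype
[monitor:///var/log/remote/firewall/*.log]
sourcetype = cisco:asa
index = network

[monitor:///var/log/remote/switches/*.log]
sourcetype = cisco:ios
index = network

Each stanza then gets the parsing rules of its own sourcetype, with no props/transforms gymnastics needed.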
What is your full search?
Hey splunkers, We are trying to implement and segregate roles in SOAR, and so we have several roles with several users in them. The problem is that every user can see all other users and assign containers/tasks to them. Is there a way to restrict visibility/assignment of other users in the platform? I know it should probably be related to users & roles permissions, but I'm not getting it right... Thanks
Hi @tuts, for configuring syslog, you should follow the instructions at https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Data/Monitornetworkports For Sysmon, you should download the Splunk Add-on for Sysmon and follow the instructions at https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/About Ciao. Giuseppe
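If it helps, a network-port input boils down to an inputs.conf stanza along these lines (the port, sourcetype, and index here are illustrative assumptions, not values from the docs):

# listen for syslog on UDP 514
[udp://514]
sourcetype = syslog
index = network

That said, a syslog daemon (rsyslog/syslog-ng) writing to files that a forwarder monitors is generally more robust than pointing devices straight at Splunk.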
Awesome. I so often forget to use the "chart" command for such scenarios. Thank you
Hello Splunk Community, I am working on a project that uses Splunk, and I need your assistance in properly installing and configuring both Syslog and Sysmon to ensure efficient data collection and analysis.
Hi Team, I am unable to log in to the controller as it is throwing an error called "Permission Issue." Earlier I was able to log in to the controller, but currently I am not. While signing in, the page shows authentication success, but afterwards it shows the permission issue. Please help me on priority! Please find the attached screenshot for your reference. error screenshot Thanks & Regards, PadmaPriya
Hi @rikinet , in addition to the perfect solution from @bowesmana , you could test the Horizon Chart add-on (https://splunkbase.splunk.com/app/3117) that gives you the requested parallel visualization. Ciao. Giuseppe
url = "https://xyz.com/core/api-ua/user-account/stix/v2.1?isSafe=false&key=key"
# Path to your custom CA bundle (optional, if you need to use a specific CA bundle)
ca_bundle_path = "/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem"
# Make the API call through the HTTPS proxy with SSL verification
response = requests.get(url, proxies=proxies, verify=ca_bundle_path)
print("Response content:", response.content)

If I use this code in a separate Python script, it works and gives the response. However, if I use the same code in Splunk, it doesn't. I get: SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1106)')))

The code that is being used is:

# collect any .pem/.crt files shipped with the app
files = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'App-Name', 'certs')
pem_files = [f"{files}/{file}" for file in os.listdir(path=files) if (file.endswith('.pem') or file.endswith('.crt'))]
url = f"{url}/core/api-ua/v2/alerts/attack-surface?type=open-ports&size=1&key={api_token}"
if pem_files:
    logger.info(f"Certificate used: {pem_files[0]}")
    logger.info(requests.__version__)
    logger.info(urllib3.__version__)
    logger.info(proxy_settings)
    response = requests.request(
        "GET",  # the HTTP method must be passed as a string
        url,
        verify=pem_files[0],
        proxies=proxy_settings
    )
    response.raise_for_status()

In place of verify=pem_files[0] I have added verify="/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem". Still the same error.
Hi @ITWhisperer , Thank you for the suggestions. This seems to work.
Hi Team, I have developed a sample .NET MSMQ sender and receiver application that uses asynchronous messages. Application: interacting with MSMQ (.\\private$\\My queue). AppDynamics Version: 24.5.2 Transaction Detection: Configured automatic transaction detection rules. Custom Match Rules: I created custom match rules specifically for MSMQ operations but did not see the expected results. We are expecting an MSMQ entry point for the .NET consumer application. I want to know how much time the data has been present in MSMQ. I followed the instructions provided in the link below, but they didn't help. Message Queue Entry Points (appdynamics.com) Please look into this issue and help us to resolve it. Thanks in advance.
rex just extracts the fields; now add | table count time if you want each event listed with its count and time. If you want some other representation of those values, please say what you want.
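Put together, a sketch of the full pipeline (the rex pattern here is hypothetical, standing in for whatever extraction the thread started with):

... | rex "count=(?<count>\d+)\s+time=(?<time>\S+)"
| table count time

rex creates the count and time fields on each event, and table then lists just those two columns.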
Hey @isoscow not sure if ideal/best practice/current, but we created alerts which dump results to a csv file using "| outputcsv", and which also run a script as part of their alert actions. The script sends the data from the csv to the third-party ticketing system.
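As a rough illustration of that pattern (the search terms and file name are made up, and the script itself is configured under the alert's "run a script" action rather than in the search):

index=app_events severity=critical
| stats latest(_time) as last_seen count by host error_code
| outputcsv ticket_feed.csv

The scripted action then reads ticket_feed.csv from the search head's dispatch area and forwards each row to the ticketing API.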
This is generating logs and not the expected output.  
Not sure why that is happening - does the search log show anything? Have you tried using appendpipe rather than append - that will run after the initial search, not before:

| appendpipe
    [ | stats count
      | addinfo
      | eval x=mvappend(info_min_time, info_max_time)
      | mvexpand x
      | rename x as _time
      | eval _t=0
      | table _time, _t ]
Another example here: Solved: Re: How to use eval if there is no result from the... - Splunk Community
As @PickleRick says, streamstats with the rolling 10m time window:

... EventID=4624 ...
| streamstats time_window=10m count by user
| stats max(count) as max by user
| sort - max
| head 1

or if you also want to show the time of the 10 minute window:

...
| streamstats time_window=10m count by user
| eventstats max(count) as max by user
| where count=max
| stats max(count) as max by _time user
| sort - max
As @PickleRick says, streamstats with the rolling 10m time window ... EventID=4624 ... | streamstats time_window=10m count by user | stats max(count) as max by user | sort - max | head 1 or if you also want to show the time of the 10 minute window ... | streamstats time_window=10m count by user | eventstats max(count) as max by user | where count=max | stats max(count) as max by _time user | sort - max