All Posts

Hi @rikinet, in addition to the perfect solution from @bowesmana, you could also try the Horizon Chart add-on (https://splunkbase.splunk.com/app/3117), which gives you the requested parallel visualization. Ciao. Giuseppe
url = "https://xyz.com/core/api-ua/user-account/stix/v2.1?isSafe=false&key=key"
# Path to your custom CA bundle (optional, if you need to use a specific CA bundle)
ca_bundle_path = "/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem"
# Make the API call through the HTTPS proxy with SSL verification
response = requests.get(url, proxies=proxies, verify=ca_bundle_path)
print("Response content:", response.content)

If I use this code in a separate Python script, it works and gives the response. However, if I use the same code in Splunk, it doesn't. I get:

SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:1106)'))

The code that is being used is:

files = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'App-Name', 'certs')
pem_files = [f"{files}/{file}" for file in os.listdir(path=files) if (file.endswith('.pem') or file.endswith('.crt'))]
url = f"{url}/core/api-ua/v2/alerts/attack-surface?type=open-ports&size=1&key={api_token}"
if pem_files:
    logger.info(f"Certificate used: {pem_files[0]}")
    logger.info(requests.__version__)
    logger.info(urllib3.__version__)
    logger.info(proxy_settings)
    response = requests.request(
        "GET",
        url,
        verify=pem_files[0],
        proxies=proxy_settings
    )
    response.raise_for_status()

In place of verify=pem_files[0], I have added verify="/home/ubuntu/splunk/etc/apps/APP-Name/certs/ap.pem". Still the same error.
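A minimal debugging sketch, under one assumption: "unable to get issuer certificate" usually means the PEM passed to verify= contains the leaf or root but not the full issuer chain, and Splunk's bundled Python cannot fall back to the system store the way a standalone script might. Concatenating every certificate in the app's certs directory into one bundle and passing that is one way to test this (the url and proxy values here are placeholders):

import os
import tempfile

import requests

# Merge every PEM/CRT file in the app's certs directory into one bundle, so the
# full issuer chain (leaf + intermediates + root CA) is available for verification.
certs_dir = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'App-Name', 'certs')
bundle = tempfile.NamedTemporaryFile(mode='w', suffix='.pem', delete=False)
for name in sorted(os.listdir(certs_dir)):
    if name.endswith(('.pem', '.crt')):
        with open(os.path.join(certs_dir, name)) as cert_file:
            bundle.write(cert_file.read().strip() + '\n')
bundle.close()

url = "https://xyz.com/core/api-ua/user-account/stix/v2.1?isSafe=false&key=key"
proxy_settings = {"https": "http://proxy.example.com:8080"}  # hypothetical proxy

response = requests.get(url, proxies=proxy_settings, verify=bundle.name)
response.raise_for_status()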
Hi @ITWhisperer , Thank you for the suggestions. This seems to work.
Hi Team, I have developed a sample .NET MSMQ sender and receiver application that uses asynchronous messaging.

Application: interacting with MSMQ (.\\private$\\My queue)
AppDynamics version: 24.5.2
Transaction detection: configured automatic transaction detection rules
Custom match rules: I created custom match rules specifically for MSMQ operations but did not see the expected results

We are expecting an MSMQ entry point for the .NET consumer application. I want to know how long the data has been present in MSMQ. I followed the instructions provided in the link below, but they didn't help.

Message Queue Entry Points (appdynamics.com)

Please look into this issue and help us resolve it. Thanks in advance.
rex just extracts the fields; now add | table count time if you want each event listed with the count and time. If you want some other representation of those values, please say what you want.
Hey @isoscow, not sure if this is ideal/best practice/current, but we created alerts which dump results to a CSV file using "| outputcsv", and which also run a script as part of their alert actions. The script sends the data from the CSV to the third-party ticketing system.
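A minimal sketch of that pattern, with a hypothetical search and file name:

index=app_errors severity=high
| stats count by host, error_code
| outputcsv ticket_export

Saved as an alert, this writes ticket_export.csv under $SPLUNK_HOME/var/run/splunk/csv on the search head each time it fires; a script configured as an alert action can then read that file and post the rows to the ticketing system's API.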
This is generating logs and not the expected output.  
Not sure why that is happening - does the search log show anything?

Have you tried using appendpipe rather than append - that will run after the initial search, not before.

| appendpipe
    [ | stats count
      | addinfo
      | eval x=mvappend(info_min_time, info_max_time)
      | mvexpand x
      | rename x as _time
      | eval _t=0
      | table _time, _t ]
Another example here: Solved: Re: How to use eval if there is no result from the... - Splunk Community
As @PickleRick says, streamstats with the rolling 10m time window:

... EventID=4624 ...
| streamstats time_window=10m count by user
| stats max(count) as max by user
| sort - max
| head 1

or, if you also want to show the time of the 10 minute window:

...
| streamstats time_window=10m count by user
| eventstats max(count) as max by user
| where count=max
| stats max(count) as max by _time user
| sort - max
I have a dashboard with multiple line charts showing values over time. I want all charts to have the same fixed time (X) axis range, so I can compare the graphs visually - something like the fixedrange option in the timechart command. However, I use a simple "| table _time, y1, y2, yN" instead of timechart, because I want the real timestamps in the graph, not some approximation due to timechart's notorious binning. To mimic the fixedrange behavior, I append a hidden graph with just two coordinate points (t_min|0) and (t_max|0):

... | table _time, y1, y2, y3, ..., yN
| append
    [ | makeresults
      | addinfo
      | eval x=mvappend(info_min_time, info_max_time)
      | mvexpand x
      | rename x as _time
      | eval _t=0
      | table _time, _t ]

This appended search appears very cheap to me - on its own it runs in less than 0.5 seconds. But now I have realized that it makes the overall search dramatically slower, about 10x in time; the number of scanned events explodes. This even happens when I reduce it to:

| append maxout=1 [ | makeresults count=1 ]

What's going on here? I would have expected the main search to run exactly as fast as before, and the only toll should be the time required to add one more line with a timestamp to the end of the finalized table, no?
Dashboard Studio has a 1000-item limit because going over that number is really hard for the browser. Classic dashboards don't have the limit, but if you create one with 10,000 items it takes many seconds to show the list, so 25,000 would be rather useless. It's a browser issue more than a Splunk issue. You can't change this in any limits.conf, and your parameter is not even an option. So, as @ITWhisperer says, it's better to structure your dashboard so you have some initial filter, e.g. another dropdown or a text input that is used as a filter to limit the size of the dropdown.
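A minimal sketch of that filter pattern in classic Simple XML, with a hypothetical index and field - the text input's token narrows the search that populates the dropdown:

<input type="text" token="host_filter" searchWhenChanged="true">
  <label>Host filter</label>
  <default>*</default>
</input>
<input type="dropdown" token="host_tok">
  <label>Host</label>
  <search>
    <query>index=network_devices | stats count by host | search host="*$host_filter$*" | head 1000</query>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
</input>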
Sorry for the vagueness; the imprecise wording is intentional due to the nature of the environment I work in. The network devices' logs get sent to a syslog server. The syslog server writes all the logs to files under a specific path. On our server-class server, the data input is configured to read all the files from that path (it's a unique enough path) and send them to our "network_devices" index.

So the data is being sent to the correct index, but a good portion of the logs end up in sourcetype=syslog rather than the TA's sourcetype. That is where I am stuck.
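A sketch of the kind of stanza that would pin the sourcetype at input time - the path and sourcetype names here are hypothetical, and this assumes the TA expects a specific sourcetype:

# inputs.conf on the server reading the syslog server's files
[monitor:///var/log/network_devices/.../*.log]
index = network_devices
sourcetype = cisco:asa
disabled = false

If sourcetype is left unset on the monitor stanza, Splunk falls back to automatic sourcetyping, which is one common way syslog-formatted files like these end up in sourcetype=syslog.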
That is indeed strange, because your setup - according to the specs for both files (inputs.conf and outputs.conf) - should work as expected. I suppose you checked with btool what the effective config is regarding your inputs and outputs (especially that nothing overwrites your _TCP_ROUTING)? One thing I'd try would be to add the _TCP_ROUTING entries to the [default] stanza and to [WinEventLog] (if applicable; I suppose in your case it's not).
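For reference, the checks would look something like this on the cloud HF:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i _TCP_ROUTING
$SPLUNK_HOME/bin/splunk btool outputs list --debug

The --debug flag prints the file each setting comes from, which makes it easy to spot a stanza in another app overriding the routing.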
Hi Experts, it used to work fine (I uploaded the last version last year), but today I am trying to upload a new version of our Splunk app, https://splunkbase.splunk.com/app/4241, and it fails. I tried multiple times, but it failed each time. There is no error in the UI, but I can see a 403 in the inspector:

POST https://classic.splunkbase.splunk.com/api/v0.1/app/4241/new_release/ 403 (Forbidden)

Could you please let me know what is going on here?
Hi! Now that global logon has been implemented, how can you obtain the value of the needed cookie? I can only get it manually, but I need to obtain it programmatically; however, requesting login with basic auth no longer works.
It only works if you provide the header 'Cookie': 'JSESSIONID=<the_actual_value>;'. The actual value can be taken from the dev tools once you're logged in - just look in the Network tab and read the request headers of any request that was sent.
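A minimal sketch in Python, assuming the requests library and a hypothetical controller URL - the JSESSIONID value is the one copied from the browser's dev tools:

import requests

# Cookie header exactly as seen in the browser's request headers
headers = {'Cookie': 'JSESSIONID=<the_actual_value>;'}

response = requests.get(
    'https://controller.example.com/controller/rest/applications',
    headers=headers,
)
response.raise_for_status()
print(response.text)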
Hi Splunkers, today I have a strange situation that requires thorough data sharing on my side, so please forgive me if I'm going to be long.

We are managing a Splunk Enterprise infrastructure previously managed by another company. We are in charge of AS-IS management and, at the same time, performing the migration to a new environment. The new Splunk env setup is done, so now we need to migrate the data flow. Following Splunk best practice, we need to temporarily perform a double data flow:

Data must still go from log sources to the old env
Data must also flow from log sources to the new env

We already faced, at another customer, a double data flow, managed using the Route and filter data doc and support here on the community. So the point is not that we don't know how it works. The issue is that something is not going as expected.

So, how is the current env configured? Below are the key elements:

A set of HFs deployed in the customer's data center.
A cloud HF in charge of collecting data from the above HFs and other data inputs, like network ones.
2 different indexers: they are not in a cluster, they are separate and isolated indexers. The first one collects a subset of the data forwarded by the cloud HF, the second one the remainder.

So, how is the cloud HF configured for tcp data routing? In $SPLUNK_HOME$/etc/system/local/inputs.conf, two stanzas are configured to receive data on ports 9997 and 9998; the configuration is more or less:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group

Then, in $SPLUNK_HOME$/etc/system/local/outputs.conf we have:

[tcpout]
defaultGroup=Indexer1_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

So, the current behavior is:

Logs collected on port 9997 of the cloud HF are sent to Indexer1
Logs collected on port 9998 of the cloud HF are sent to Indexer2
Everything else, like network input data, is sent to Indexer1, thanks to the default group setting

At this point, we need to insert the new environment's hosts; in particular, we need to link a new set of HFs. In this phase, as already shared, we need to send data to the old env and to the new one. We can discuss whether to avoid inserting another HF set, but there are some reasons for using it, and the architecture has been approved by Splunk itself. So, what we have to achieve now is:

All data is still sent to the old Indexer1 and Indexer2
All data is also sent to the new HF set

How did we try to do this? Below is our changed configuration.

inputs.conf:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group, newHFs_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group, newHFs_group

outputs.conf:

[tcpout]
defaultGroup=Indexer1_group, newHFs_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

[tcpout:newHFs_group]
disabled=false
server=HF1:9997, HF2:9997, HF3:9997

In a nutshell, we tried to achieve:

Logs collected on port 9997 of the cloud HF are sent to Indexer1 and the new HFs
Logs collected on port 9998 of the cloud HF are sent to Indexer2 and the new HFs
Everything else is sent, thanks to the default group setting, to Indexer1 and the new HFs

So, what went wrong?

Logs collected on port 9997 of the cloud HF are sent correctly to both Indexer1 and the new HFs
Logs collected on port 9998 of the cloud HF are sent correctly to both Indexer2 and the new HFs
The remaining logs are not correctly sent to both Indexer1 and the new HFs
In particular, we should see the following behavior: all logs not collected on ports 9997 and 9998, like network data inputs, are equally sent to Indexer1 and the new HFs - a copy to Indexer1 and a copy to the new HFs. So, if we output N logs, we must have 2N logs sent: N to Indexer1 and N to the new HFs.

What we are seeing instead is: all logs not collected on ports 9997 and 9998, like network data inputs, are auto load balanced and split between Indexer1 and the new HFs. So, if we output N logs, we see only N sent: roughly 80% go to Indexer1 and the remaining 20% to the new HFs.

I have underlined many times that some of the logs not collected on ports 9997 and 9998 are the network ones, because we are seeing that the auto LB and log splitting is happening mostly with them.
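A sketch of one variant worth testing, in line with the earlier suggestion about putting _TCP_ROUTING in the [default] stanza - routing the network inputs explicitly in inputs.conf on the cloud HF rather than relying on defaultGroup, so the clone does not depend on the default-group fallback (the UDP port here is hypothetical):

[default]
_TCP_ROUTING = Indexer1_group, newHFs_group

[udp://514]
_TCP_ROUTING = Indexer1_group, newHFs_group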
With the stats command you can end up with the same field name in both the aggregation function (in your case you want a count of events, which yields a field named just count) and the list of fields by which you split the results (in your case count is also a field name within the event). You can work around the problem by renaming the aggregated field, like:

| stats count as event_count by count

This way the count of events will not be named count in the results but event_count, whereas the field by which you split the results (which comes from your events) will stay named count. Yes, it's a tiny bit confusing.

Anyway, I don't see the relation between your data and your desired results. And your final table command is completely unnecessary at this point - your results will already contain just the fields count and time after the last stats command, so the table command is not needed.
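A tiny runnable illustration of the rename, using makeresults with inline data (hypothetical values; assumes a Splunk version that supports format=csv):

| makeresults format=csv data="count
2
2
3"
| stats count as event_count by count

This yields one row per distinct count value from the events (2 and 3), with event_count showing how many events carried each value (2 and 1).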
Probably the simplest (assuming the event you posted is an accurate representation of your events) is to use rex to extract the fields. | rex "count:(?<count>\d+) time:(?<time>\d+)ms"
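A self-contained way to test that extraction, with a hypothetical sample event built via makeresults:

| makeresults
| eval _raw="count:42 time:187ms"
| rex "count:(?<count>\d+) time:(?<time>\d+)ms"
| table count, time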