All Posts

Try something like this (after finding all events) | rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:" | rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:" | rex... See more...
Try something like this (after finding all events):

| rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:"
| rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:"
| rex field=_raw "Restart Pending event from command, (?<Failure>.*?) Workid"
| eval Step=coalesce(Step,coalesce(Success, Failure))
| stats count(eval(if(Step==Success,1,null()))) as Success count(eval(if(Step==Failure,1,null()))) as Failure by Step
@ITWhisperer , Query1 has a field extracted as Step. So in this Step field which we have extracted as the information "Validation, Creation, Compliance Portal Report etc., with count So the same inf... See more...
@ITWhisperer, Query1 has a field extracted as Step, which holds values such as "Validation", "Creation", "Compliance Portal Report", etc., along with a count. The same values need their Success count pulled with the second query and their Failure count pulled with the third query. The output (combining all 3 queries) should be something like this:

Step (Count)                  Success (Count)    Failure (Count)
Validation                    3                  2
Creation                      2                  2
Compliance Report Portal      2                  2

So kindly help with the query.
Hi @kiran_panchavat, is renaming the DS "app/local" to "app/local.OLD" enough? Thanks.
@jaibalaraman If the time is in milliseconds, microseconds, or nanoseconds, you must convert it into seconds. You can use the pow function for the conversion: to convert from milliseconds to seconds, divide the number by 1000 (10^3); from microseconds, divide by 10^6; from nanoseconds, divide by 10^9. A minimal sketch is shown below. Date and Time functions - Splunk Documentation *** If the above solution helps, an upvote is appreciated. ***
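For example, a minimal sketch assuming the raw value sits in a field called time_ms holding epoch milliseconds (the field name is illustrative; swap pow(10,3) for pow(10,6) or pow(10,9) for microseconds or nanoseconds):

| eval time_sec = time_ms / pow(10,3)
| eval time_readable = strftime(time_sec, "%Y-%m-%d %H:%M:%S.%3N")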
@splunkreal If you create props.conf on the deployment server under $SPLUNK_HOME/etc/deployment-apps and push it via the deployment server, it will be stored under $SPLUNK_HOME/etc/apps on the heavy forwarder. Please remove the locally created file on the heavy forwarder before pushing from the deployment server; otherwise the pushed copy will not automatically appear in /etc/apps on the heavy forwarder when the deployment server is reloaded, as the existing local copy will supersede it. https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Createdeploymentapps  How to edit a configuration file - Splunk Documentation
Hi @Muhammad Husnain.Ashfaq, Thank you for coming back to the community and sharing the solution! 
Hi @WILLIAM.GREENE, Have you had a chance to review the reply above? Can you confirm if this has helped you? If not, please jump back into the conversation to keep it going. 
Hello, if we have on DS "app/local" with conf files, is that possible restarting it that it pushes DS "app/local" to HF "app/local" and deletes custom local conf files on HF (created from HF GUI)? ... See more...
Hello, if we have "app/local" with conf files on the DS, is it possible that restarting it pushes the DS "app/local" to the HF "app/local" and deletes the custom local conf files on the HF (created from the HF GUI)? Thanks.
Yes. Starts with $7. Thanks for the reply
Hi, I was looking for an answer to the same problem, and I came across this older post which more or less confirmed my understanding of the issue and the available solutions: https://community.splunk.com/t5/Deployment-Architecture/Why-is-cluster-master-reporting-quot-Cannot-fix-search-count-as/m-p/150441/highlight/true#M5597

Short summary: hot buckets are streamed from the originating indexer to the other indexers in the cluster, but sometimes they get out of sync for various reasons and the CM starts displaying this type of error. There are two ways to fix it: either roll the buckets (via the GUI on the CM, the API endpoint, or by performing a rolling restart of the peers) or wait for them to roll naturally. In my case, I'll now be investigating why and how we get these de-synchronisations.

On a different note, and perhaps not completely relevant, you indicated that your hot buckets have a retention of 91 days. That seems pretty long to me (I have not double-checked the docs on that, but still). There is also the warm stage between hot and cold; I would typically have a shorter period for the hot buckets and keep them warm for a sensible period before rolling them cold.
Thank you very much for your help. The code that works  ------- index=firewall event_type="error" [search index=firewall sourcetype="metadata" enforcement_mode=block | dedup host1 | tabl... See more...
Thank you very much for your help. The code that works:
-------
index=firewall event_type="error"
    [search index=firewall sourcetype="metadata" enforcement_mode=block
    | dedup host1
    | table host1
    | format]
| dedup host
| table event_type, host, ip
-----------
Thank you very much @VatsalJagani. I had to click on the version number (link), and that opened what I needed. Best regards, Altin
Hello, How to solve " Events might not be returned in sub-second order due to search memory limits" without increasing the value of the following limits.conf setting:[search]:max_rawsize_perchunk?... See more...
Hello, how can I resolve "Events might not be returned in sub-second order due to search memory limits" without increasing the value of the limits.conf setting [search] max_rawsize_perchunk? I got this message after I scheduled a query that moves more than 150k rows into a summary index. I appreciate your help. Thank you
Hi. I'm afraid that in this case there is an SHC which is managed by the deployer, as it should be, BUT then someone has installed one app onto one member locally from the CLI, or just unpacked the file into the correct app folder? @aasserhifni is this assumption correct? If it is, then you are in deep s...t.

I have run into this kind of situation once, and the only way I got rid of it was to disable that app locally. It doesn't even help to install it first via the SHC deployer and then remove it via the SHC deployer; it just sits there. I haven't had time to figure out whether there is any way to get rid of it from the CLI or by some other method. It seems there is some (at least to me) unknown mechanism for how the SHC manages this kind of situation, probably something with the kvstore plus something on the filesystems and something on the captain.

Maybe you could try stopping the whole SHC, then removing the app on the member, and checking whether it's still there after starting all nodes. I couldn't test that, as that environment was a quite busy production with important alerts etc. If that doesn't help, you should ask Splunk Support whether they have some way to figure it out.

r. Ismo
I have it this way (thanks splunk/ansible-splunk) - name: Set admin access via seed when: splunk_first_run | bool block: - name: "Hash the password" command: "{{ splunk.exec }} hash-p... See more...
I have it this way (thanks splunk/ansible-splunk):

- name: Set admin access via seed
  when: splunk_first_run | bool
  block:
    - name: "Hash the password"
      command: "{{ splunk.exec }} hash-passwd {{ splunk.password }}"
      register: hashed_pwd
      changed_when: hashed_pwd.rc == 0
      become: yes
      become_user: "{{ splunk.user }}"
      no_log: "{{ hide_password }}"
    - name: "Generate user-seed.conf (Linux)"
      ini_file:
        owner: "{{ splunk.user }}"
        group: "{{ splunk.group }}"
        dest: "{{ splunk.home }}/etc/system/local/user-seed.conf"
        section: user_info
        option: "{{ item.opt }}"
        value: "{{ item.val }}"
        mode: 0644
      with_items:
        - {opt: 'USERNAME', val: '{{ splunk.admin_user }}'}
        - {opt: 'HASHED_PASSWORD', val: '{{ hashed_pwd.stdout }}'}
      loop_control:
        label: "{{ item.opt }}"
      when: ansible_system is match("Linux")
      become: yes
      become_user: "{{ splunk.user }}"
      no_log: "{{ hide_password }}"

Then the user and password information lives in config files, which are kept per environment etc. in git. All of those secrets are encrypted with ansible-vault, so there are no plain-text passwords in your repository/inventory. You can have as many config files as you need, usually one or more per environment and customer.
Hi @Eduardo.Rosa! I don't think this is supported at the moment. I'm using v24.3.1-1511 of the Controller and it doesn't seem to have an option for the PATCH HTTP method. However, you can share improvements and ideas for this specific item on our Idea Exchange.
Also, our SMEs don't all patch their servers on the same day, but they usually patch similar servers on the same day. So on Monday they might patch the IDS servers, on Wednesday they might patch the vulnerability scanners, and the following week they might patch all the application servers, etc. So the installed dates are frequently different, and sometimes the versions are different, across all our Linux hosts, but they are consistent between servers of the same type (IDS, Scanner, Application, etc.).
How do you wish them to "be combined"?
The link between the two searches would be our monthly list of installed packages; after patching we gather the current list of installed packages and ingest the data into Splunk. We would like to compare the list from the current month to the lists from previous months, because not all packages have an update/patch each month. Kernel packages, for example, have frequent updates/patches and usually change every month, but for less frequently updated/patched packages we might need to compare back two or more months. So I would want to compare the current installed packages with the last two, or even as far back as six months or a year.

I thought that if I "joined" the list of previously installed packages, deduped or reduced with stats latest(version) AS previous_version, latest(_time) AS previous_installed_date by package, I could capture the last version and installed date of each package.

Search 1 would have the list of the current packages: package, installed date, version.
Search 2 would have the list of the last installed date and last version of all previously installed packages, with different field names for installed date and version.
The join would join the two lists by package name.
The output would be package, version, installed date, last version, last installed date (a sketch along these lines is shown below).
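A minimal sketch of that join, under stated assumptions: the index, sourcetype, and time windows are placeholders, and the field names (package, version) are taken from the description above:

index=linux_packages sourcetype=installed_packages earliest=-1mon@mon
| stats latest(version) AS version latest(_time) AS installed_date by package
| join type=left package
    [ search index=linux_packages sourcetype=installed_packages earliest=-6mon@mon latest=-1mon@mon
    | stats latest(version) AS previous_version latest(_time) AS previous_installed_date by package ]
| eval installed_date=strftime(installed_date, "%Y-%m-%d"), previous_installed_date=strftime(previous_installed_date, "%Y-%m-%d")
| table package version installed_date previous_version previous_installed_date

One design note: join subsearches are subject to result-count and runtime limits, so for large package inventories a single search over the whole window, splitting current vs. previous with an eval on _time before stats, may be more robust.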