All Posts

Hi, I was looking for an answer to the same problem, and I came across this older post, which more or less confirmed my understanding of the issue and the available solutions: https://community.splunk.com/t5/Deployment-Architecture/Why-is-cluster-master-reporting-quot-Cannot-fix-search-count-as/m-p/150441/highlight/true#M5597 Short summary: hot buckets are streamed from the originating indexer to the other indexers in the cluster, but sometimes they get out of sync for various reasons, and the CM starts displaying this type of error. There are two ways to fix it: either roll the buckets (via the GUI on the CM, via the API endpoint, or by performing a rolling restart of the peers), or wait for them to roll naturally. In my case, I'll now be investigating why and how we get these desynchronisations. On a different note, and perhaps not completely relevant: you indicated that your hot buckets have a retention of 91 days. That seems pretty long to me (I haven't double-checked the docs on that, but still). There is also the warm stage between hot and cold; I would typically keep a shorter period for the hot buckets, keep them warm for a sensible period, and then roll them to cold.
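For reference, the API option mentioned above is a plain REST call against the cluster manager. A minimal sketch (the host, credentials, and bucket_id below are placeholders; the endpoint name is from the Splunk REST API reference, so double-check it against your version's docs):

curl -k -u admin:changeme \
    https://cm.example.com:8089/services/cluster/master/control/control/roll-hot-buckets \
    -d bucket_id=myindex~123~ABCDEF01-2345-6789-ABCD-EF0123456789

The bucket_id takes the usual index~localid~guid form you can read off the CM's bucket status page.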
Thank you very much for your help. The code that works:
-------
index=firewall event_type="error"
    [search index=firewall sourcetype="metadata" enforcement_mode=block
    | dedup host1
    | table host1
    | format]
| dedup host
| table event_type, host, ip
-----------
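(For anyone landing here later: the subsearch's | format expands into an OR'ed filter that is substituted into the outer search, roughly like this, with illustrative host names:)

( ( host1="fw-host-a" ) OR ( host1="fw-host-b" ) )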
Thank you very much @VatsalJagani. I had to click on the version number (the link), and that opened exactly what I needed. Best regards, Altin
Hello, how can I solve "Events might not be returned in sub-second order due to search memory limits" without increasing the value of the following limits.conf setting: [search] max_rawsize_perchunk? I got this message after I scheduled a query that moves more than 150k rows into a summary index. I appreciate your help. Thank you.
Hi. I'm afraid that in this case there is a SHC which is managed by the deployer, as it should be, BUT then someone has installed one app onto one member locally from the CLI, or just unpacked the file into the correct app folder. @aasserhifni is this assumption correct? If it is, then you are in deep s...t. I have found this kind of situation once, and the only way I got rid of it was to disable that app locally. It doesn't even help to install the app first via the SHC deployer and then remove it via the SHC deployer; it just sits there. I haven't had time to figure out whether there is any way to get rid of it from the CLI or by some other method. It seems there is some (at least to me) unknown mechanism for how the SHC manages this kind of situation, probably something with the KV store, something on the filesystems, and something on the captain. Maybe you could try to stop the whole SHC, then remove that app on the member and check whether it is still there after starting all the nodes. I cannot test that, as that environment was a quite busy production with important alerts etc. If that doesn't help, you should ask Splunk Support whether they have some way to figure it out. r. Ismo
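In shell terms, the "stop everything, remove, restart" experiment described above would look roughly like this (an untested sketch; the app name is a placeholder, and take backups first):

$SPLUNK_HOME/bin/splunk stop                 # run on every SHC member first
rm -rf $SPLUNK_HOME/etc/apps/<rogue_app>     # only on the member with the rogue app
$SPLUNK_HOME/bin/splunk start                # then start every member again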
I have it this way (thanks splunk/ansible-splunk):

- name: Set admin access via seed
  when: splunk_first_run | bool
  block:
    - name: "Hash the password"
      command: "{{ splunk.exec }} hash-passwd {{ splunk.password }}"
      register: hashed_pwd
      changed_when: hashed_pwd.rc == 0
      become: yes
      become_user: "{{ splunk.user }}"
      no_log: "{{ hide_password }}"

    - name: "Generate user-seed.conf (Linux)"
      ini_file:
        owner: "{{ splunk.user }}"
        group: "{{ splunk.group }}"
        dest: "{{ splunk.home }}/etc/system/local/user-seed.conf"
        section: user_info
        option: "{{ item.opt }}"
        value: "{{ item.val }}"
        mode: 0644
      with_items:
        - {opt: 'USERNAME', val: '{{ splunk.admin_user }}'}
        - {opt: 'HASHED_PASSWORD', val: '{{ hashed_pwd.stdout }}'}
      loop_control:
        label: "{{ item.opt }}"
      when: ansible_system is match("Linux")
      become: yes
      become_user: "{{ splunk.user }}"
      no_log: "{{ hide_password }}"

The user and password information is then in config files, which live per environment etc. in Git. All those secrets are encrypted with ansible-vault, so there are no plain-text passwords in your repository/inventory. You can have as many config files as you need, usually one or more per environment and customer.
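For reference, the generated user-seed.conf ends up looking roughly like this (the values here are placeholders):

[user_info]
USERNAME = admin
HASHED_PASSWORD = $6$<salt>$<hash>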
Hi @Eduardo.Rosa! I don't think this is supported at the moment. I'm using v24.3.1-1511 of Controller and it doesn't seem to have an option for the PATCH HTTP method. However, you can share improvements and ideas for this specific item on our idea exchange.
Also, our SMEs don't all patch their servers on the same day, but they usually patch similar servers on the same day. So on Monday they might patch the IDS servers, on Wednesday they might patch the vulnerability scanners, and the following week they might patch all the application servers, etc. So the installed dates are frequently different, and sometimes versions differ between all our Linux hosts, but they are consistent between servers of the same type (IDS, Scanner, Application, etc.).
How do you wish them to "be combined"?
The link between the two searches would be our monthly list of installed packages; after patching we gather the current list of installed packages and ingest the data into Splunk. We would like to compare the list from the current month to the lists from previous months, because not all packages have an update/patch each month. The kernel, for example, has frequent updates/patches and usually changes every month, but for less frequently updated/patched packages we might need to compare back two or more months. So I would want to compare the current installed packages with the last two months, or even as far back as six months or a year. I thought that if I "joined" the list of previously installed packages, deduped or aggregated with stats latest(version) AS previous_version, latest(_time) AS previous_installed_date by package, I could capture the last version and installed date of each package (see the sketch below).

search 1 would have the list of the current packages - package, installed date, version
search 2 would have the list of the last installed date and last version of all previously installed packages, with different field names for installed date and version
the join would join the two lists by package name
output would be package, version, installed date, last version, last installed date
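A minimal SPL sketch of that idea (the index name and field names are assumptions for illustration, not taken from the actual environment):

index=packages earliest=@mon latest=now
| stats latest(version) AS version latest(_time) AS installed_date BY package
| join type=left package
    [ search index=packages earliest=-6mon@mon latest=@mon
      | stats latest(version) AS previous_version latest(_time) AS previous_installed_date BY package ]
| convert ctime(installed_date) ctime(previous_installed_date)
| table package version installed_date previous_version previous_installed_date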
An example would be the Linux kernel: if a new kernel patch was applied in this month's patching process, we would like to find the last kernel version that was installed, whether that was the month before, two months before, or even earlier. Kernel versions change fairly regularly, but some of the other Linux packages might change/update a little less frequently. After we patch, we capture the list of installed packages and ingest the data into Splunk, so every month we have the data on the currently installed packages. For compliance reasons, we need to verify which packages were updated during our patching process, so we are trying to compare the latest installed-packages list with the installed-package lists from previous months. Our output would be something like this:

package   current version   install date   previous version   previous install date
kernel
ssh
python
glibc
etc...
Hello, I have a standalone Splunk Enterprise 9.1.3 instance with some DCs and servers connected to it using the Forwarder Management console. At the moment I have 2 server classes configured, one for the DCs and the other one for the servers. The server class for the DCs includes only the inputs.conf file for Windows logs:

[WinEventLog://Security]
disabled = 0
index = myindex
followTail = true
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
whitelist = 4624,4634,4625,4728,4729
renderXml = false

Moreover, on the Splunk Enterprise instance I configured 2 transforms for splitting the logs into two separate indexes, like this:

props.conf:

[WinEventLog:Security]
TRANSFORMS-security = rewrite_ad_group_management, rewrite_index_adm

transforms.conf:

[rewrite_ad_group_management]
REGEX = EventCode=(4728|4729)
DEST_KEY = _MetaData:Index
FORMAT = index1

[rewrite_index_adm]
REGEX = Account Name:\s+.*\.adm
DEST_KEY = _MetaData:Index
FORMAT = index2

In particular, the goal is to send the authentication events (4624, 4634, 4625) for admin users only (Account Name:\s+.*\.adm) to index2, and only EventCodes 4728 and 4729 to index1; events that match neither transform should remain in myindex. At the moment the first transform is not working, so I'm receiving events 4728 and 4729 in index2. Am I missing something, or is there a better logic to do that? I also tried to combine 4624, 4634, 4625 and Account Name:\s+.*\.adm with:

(?ms)EventCode=(4624|4634|4625)\X*Account Name:\s+.*\.adm

Thanks in advance
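(For clarity, that combined attempt written out as a transforms.conf stanza would look like this; the stanza name is made up for illustration, not a confirmed fix:)

[rewrite_admin_auth]
REGEX = (?ms)EventCode=(4624|4634|4625)\X*Account Name:\s+.*\.adm
DEST_KEY = _MetaData:Index
FORMAT = index2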
Hi @aasserhifni, surely there's a misunderstanding: an SH can be managed by a Deployer only in a SHCluster; a Deployer cannot manage a stand-alone SH. You probably mean a Deployment Server; that's one of the checks I hinted at. If your SH is managed by a Deployment Server, you only have to remove the app from the server class where the SH is present. Ciao. Giuseppe
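(As a rough illustration, with made-up server-class, host, and app names, the mapping you would remove lives in serverclass.conf on the Deployment Server:)

[serverClass:search_heads]
whitelist.0 = sh01.example.com

[serverClass:search_heads:app:the_app_to_remove]
restartSplunkd = true

Deleting that app stanza (or taking the SH out of the whitelist) is what "removing the app from the server class" means in practice.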
No. It's either a stand-alone search head or it's managed by deployer. Let me point out again that Deployer is not the same as Deployment Server.
I have some dashboards created with Splunk Dashboard Studio. Does anyone know where I can set a static color based on values in the dashboard? Thanks much!
@gcusello @PickleRick @ITWhisperer Could you kindly take a look and provide an update on this?
Hi @gcusello. Sorry for my misunderstanding. The search head is managed by the deployer, but the app was installed on the search head only, and we just upgraded the Splunk version.
This sounds like an LB issue and not Splunk. As to why your F5 is not switching: it might be due to the continuous stream of syslog data being sent, so you will need to check your F5 LB config options, such as round-robin/least-connections etc., ensure it's configured for Layer 4 routing, and test it out. Using Splunk instances such as HFs as syslog receivers is generally for testing and non-production environments. Why? Because if you restart the HF you will lose data for UDP sources; syslog is fire-and-forget, and syslog as a protocol is not ideal for load balancing, so if you can live with the fact that you can lose data, then so be it. Other issues you can get are data imbalance on the indexers, and data not being parsed correctly, as the TAs need reconfiguring to handle sourcetype/parsing when sending syslog to Splunk receiver ports. The best practice for Splunk production environments and syslog data is Splunk SC4S, and if HA is required, then look at KeepaliveD (Layer 4) or vMotion. SC4S can handle the data and apply metadata for parsing, plus many other features, to effectively handle common syslog data. LB and HA are two different concepts.
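(For context, a direct syslog input on a HF is just a few lines of inputs.conf, something like the sketch below with a placeholder index and the standard syslog port; the point above is that any events in flight on that UDP port are lost whenever splunkd restarts:)

[udp://514]
sourcetype = syslog
index = network
connection_host = ip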
I have found the solution: for the PHP Agent, the regex needs to be wrapped in # signs. After I used my regex as below, it worked:

#(?i).*\.(jpeg|jpg|png|gif|jpeg|pdf|txt|js|html|tff|css|svg|png|pdf|dll|yml|yaml|ico|env|gz|bak)$#
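(Presumably this is because the agent hands the pattern to PHP's PCRE functions, which require delimiters around the pattern; the snippet below is just an illustration of that PHP convention, with $url as a made-up variable:)

preg_match('#(?i)\.(jpeg|png|pdf)$#', $url);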