All Posts


Hi @Eduardo.Rosa! I don't think this is supported at the moment. I'm using v24.3.1-1511 of the Controller and it doesn't seem to have an option for the PATCH HTTP method. However, you can share improvements and ideas for this specific item on our idea exchange.
Also, our SMEs don't all patch their servers on the same day, but they usually patch similar servers on the same day. So on Monday they might patch the IDS servers, on Wednesday they might patch the vulnerability scanners, and the following week they might patch all the application servers, etc. So the install dates are frequently different, and sometimes versions are different across all our Linux hosts, but they are consistent between servers of the same type (IDS, Scanner, Application, etc.).
How do you wish them to "be combined"?
The link between the two searches would be our monthly list of installed packages. After patching, we gather the current list of installed packages and ingest the data into Splunk. We would like to compare the list from the current month to the lists from previous months, because not all packages have an update/patch each month. The kernel, for example, has frequent updates/patches and usually changes every month, but for less frequently updated/patched packages we might need to compare back two or more months. So I would want to compare the current installed packages with the last two months, or even as far back as six months or a year. I thought that if I "joined" the list of previously installed packages, deduped or reduced with stats latest(version) AS previous_version, latest(_time) AS previous_installed_date by package, I could capture the last version and install date of each package.

Search 1 would have the list of the current packages: package, install date, version.
Search 2 would have the last install date and last version of all previously installed packages, with different field names for install date and version.
The join would join the two lists by package name.
The output would be: package, version, install date, last version, last install date.
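A minimal sketch of that join, assuming a hypothetical index linux_packages with fields package and version, where _time is the install date (the index and field names are placeholders, not from the original post):

index=linux_packages earliest=@mon
| stats latest(version) AS version latest(_time) AS installed_date BY package
| join type=left package
    [ search index=linux_packages earliest=-6mon@mon latest=@mon
      | stats latest(version) AS previous_version latest(_time) AS previous_installed_date BY package ]
| fieldformat installed_date=strftime(installed_date,"%Y-%m-%d")
| fieldformat previous_installed_date=strftime(previous_installed_date,"%Y-%m-%d")
| table package version installed_date previous_version previous_installed_date

Note that join subsearches are subject to result limits (50,000 rows by default), so a stats-based approach can be safer on large package inventories.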
An example would be the Linux kernel: if a new kernel patch was applied during this month's patching process, we would like to find the last kernel version that was installed, whether that was the month before, two months before, or even earlier. Kernel versions change fairly regularly, but some of the other Linux packages might change/update a little less frequently. After we patch, we capture the list of installed packages and ingest the data into Splunk, so every month we have the data on the currently installed packages. For compliance reasons, we need to verify which packages were updated during our patching process, so we are trying to compare the latest installed packages list with the installed package lists from previous months. Our output would be something like this:

package | current version | install date | previous version | previous install date
kernel  |
ssh     |
python  |
glibc   |
etc...  |
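A join-free sketch of the same comparison, again assuming the hypothetical linux_packages index and field names above: rank each package's versions by install time, keep the two most recent, and pivot them into current/previous columns.

index=linux_packages earliest=-6mon@mon
| stats latest(_time) AS installed_time BY package version
| sort 0 package -installed_time
| streamstats count AS recency BY package
| where recency<=2
| stats latest(eval(if(recency==1,version,null()))) AS current_version
        latest(eval(if(recency==1,installed_time,null()))) AS install_date
        latest(eval(if(recency==2,version,null()))) AS previous_version
        latest(eval(if(recency==2,installed_time,null()))) AS previous_install_date
        BY package
| fieldformat install_date=strftime(install_date,"%Y-%m-%d")
| fieldformat previous_install_date=strftime(previous_install_date,"%Y-%m-%d")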
Hello, I have a standalone Splunk Enterprise 9.1.3 instance with some DCs and servers connected to it using the Forwarder Management console. At the moment I have 2 server classes configured, one for the DCs and the other one for the servers. The server class for the DCs includes only the inputs.conf file for Windows logs:

[WinEventLog://Security]
disabled = 0
index = myindex
followTail = true
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
whitelist = 4624,4634,4625,4728,4729
renderXml = false

Moreover, on the Splunk Enterprise instance I configured 2 transforms for splitting the logs into two separate indexes, like this:

props.conf:
[WinEventLog:Security]
TRANSFORMS-security = rewrite_ad_group_management, rewrite_index_adm

transforms.conf:
[rewrite_ad_group_management]
REGEX = EventCode=(4728|4729)
DEST_KEY = _MetaData:Index
FORMAT = index1

[rewrite_index_adm]
REGEX = Account Name:\s+.*\.adm
DEST_KEY = _MetaData:Index
FORMAT = index2

In particular, the goal is to forward the authentication events (4624, 4634, 4625) for admin users only (Account Name:\s+.*\.adm) to index2, and only EventCodes 4728 and 4729 to index1; events that match neither transform should remain in myindex. At the moment the first transform is not working, so I'm receiving events 4728 and 4729 in index2. Am I missing something, or is there a better logic to do that? I also tried to combine 4624, 4634, 4625 and Account Name:\s+.*\.adm with (?ms)EventCode=(4624|4634|4625)\X*Account Name:\s+.*\.adm. Thanks in advance.
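One hedged observation: transforms in a TRANSFORMS- list run in the order listed, and when several match the same event, the last write to _MetaData:Index wins. A 4728/4729 event whose Account Name ends in .adm matches both stanzas, and rewrite_index_adm runs second, which would explain those events landing in index2. A sketch of one possible fix (the regex is an assumption, untested against your data) is to list the EventCode transform last and anchor the admin regex to the logon EventCodes:

props.conf:
[WinEventLog:Security]
TRANSFORMS-security = rewrite_index_adm, rewrite_ad_group_management

transforms.conf:
[rewrite_index_adm]
REGEX = (?ms)EventCode=(4624|4634|4625).*?Account Name:\s+[^\r\n]*\.adm
DEST_KEY = _MetaData:Index
FORMAT = index2

[rewrite_ad_group_management] stays as posted; listed last, its write to _MetaData:Index wins for 4728/4729 even when the account is a .adm user.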
Hi @aasserhifni, surely there's a misunderstanding: a SH can be managed by a Deployer only in a SH Cluster; a Deployer cannot manage a stand-alone SH. You probably mean a Deployment Server, which is one of the checks I hinted at. If your SH is managed by a Deployment Server, you only have to remove the App from the ServerClass the SH is in. Ciao. Giuseppe
No. It's either a stand-alone search head or it's managed by deployer. Let me point out again that Deployer is not the same as Deployment Server.
I have some dashboards created with Splunk Dashboard Studio. Does anyone know where I can set a static color based on values in the dashboard? Thanks much!
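A minimal sketch of value-based coloring on a single value visualization in Dashboard Studio's JSON source (the dataSources name, context name, and thresholds are placeholder assumptions): bind majorColor to rangeValue over a context array; using a fixed hex string instead of the binding gives a truly static color.

{
  "type": "splunk.singlevalue",
  "dataSources": { "primary": "ds_example" },
  "options": {
    "majorColor": "> majorValue | rangeValue(majorColorConfig)"
  },
  "context": {
    "majorColorConfig": [
      { "to": 50, "value": "#118832" },
      { "from": 50, "to": 80, "value": "#CBA700" },
      { "from": 80, "value": "#D41F1F" }
    ]
  }
}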
@gcusello @PickleRick @ITWhisperer Can you kindly check and provide an update on this?
Hi @gcusello, sorry for my misunderstanding. The search head is managed by the deployer, but the app was installed on the search head only, and we just upgraded the Splunk version.
This sounds like an LB issue and not Splunk. As to why your F5 is not switching, it might be due to the continuous stream of syslog data being sent, so you will need to check your F5 LB config options, such as round-robin/least connections, ensure it's configured for Layer 4 routing, and test it out.

Using Splunk instances such as HFs as syslog receivers is generally for testing and non-production environments. Why? Because if you restart the HF you will lose data for UDP sources; syslog is fire-and-forget, and syslog as a protocol is not ideal for load balancing. So if you can live with the fact that you can lose data, then so be it. Other issues you can get are data imbalance on the indexers, and data not being parsed correctly, as the TAs need reconfiguring to handle sourcetype/parsing when sending syslog to Splunk receiver ports.

The best practice for Splunk production environments and syslog data is Splunk SC4S; if HA is required, then look at Keepalived (Layer 4) or vMotion for HA. SC4S can handle the data and apply metadata for parsing, plus many other features to effectively handle common syslog data. LB and HA are two different concepts.
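For reference, a minimal sketch of the kind of HF syslog receiver under discussion (the port and sourcetype are assumptions). It illustrates why a restart drops in-flight UDP traffic: there is no acknowledgement or buffering on the sender side.

inputs.conf on the HF:
[udp://514]
sourcetype = syslog
connection_host = ip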
I have found the solution: for the PHP Agent, the regex needs to be wrapped in # signs. After I used my regex as below, it worked:

#(?i).*\.(jpeg|jpg|png|gif|jpeg|pdf|txt|js|html|tff|css|svg|png|pdf|dll|yml|yaml|ico|env|gz|bak)$#
Here's an example; you can then change it to your SPL fields:

| makeresults
| eval millis_sec = 5000
| eval seconds = millis_sec/1000
| table millis_sec, seconds
Hi @aasserhifni, in fact @PickleRick's question is the same one I asked a few answers ago: do you have a clustered SH or a stand-alone SH? If a stand-alone SH, do you have some update tools (such as Ansible or GPO), or is your SH managed by a Deployment Server? Ciao. Giuseppe
Hello @kate, below are two things that you can check:

1) index=_internal host=<<ubuntu_hf>> ---> Check whether there are any events. Even if there are only a few events, it means that connectivity is established.
2) Did you restart Splunk after installing the Splunk Cloud UF credentials package?

If the above two approaches do not help, check splunkd.log on the Ubuntu UF instance itself. It should point to why it is failing to send the logs to Splunk Cloud. Thanks, Tejas. --- If the above solution is helpful, an upvote is appreciated.
I am using it like this, but it's not mapping:

<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv
| search $new_value$
| eval InterfaceName=split(InterfaceName,",")
| stats count by InterfaceName
| table InterfaceName</query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>InterfaceName</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
  <change>
    <condition match="$value$==&quot;*&quot;">
      <set token="new_interface">InterfaceName IN ("USBANK_KYRIBA_ORACLE_CE_BANKSTMTS_INOUT", "USBANK_AP_POSITIVE_PAY", "HSBC_NA_AP_ACH", "USBANK_AP_ACH", "HSBC_EU_KYRIBA_CE_BANKSTMTS_TWIST_INOUT")</set>
    </condition>
    <condition>
      <set token="new_interface">$interface$</set>
    </condition>
  </change>
</input>
Hello @Isaac_Hailperin, can you share what steps you have taken so far? That would help me understand what is actually missing. Thanks, Tejas.
Hi Team, how do I convert a milliseconds value to seconds?

index=testing | timechart max("event.Properties.duration")

Can anyone help with an SPL query that converts the milliseconds value to seconds?
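A minimal sketch applied to the search above, assuming event.Properties.duration holds milliseconds (single quotes are needed in eval for field names containing dots):

index=testing
| eval duration_sec='event.Properties.duration'/1000
| timechart max(duration_sec) AS max_duration_sec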