Activity Feed
- Got Karma for How to handle CIM upgrades in regards to /opt/splunk/etc/apps/Splunk_SA_CIM/local/data/models/*json files?. 01-12-2021 10:49 AM
- Got Karma for Upgraded from 7.0.5 to 7.3.2 and |datamodel searches now fail - anyone ever see this issue?. 06-05-2020 12:50 AM
- Got Karma for Re: Splunk Enterprise Security new log ERRORs after upgrading enterprise to 7.3.3 and ESS from 5.0.1 to 5.3.1. 06-05-2020 12:50 AM
- Posted Is it a valid configuration to have indexers on different IP subnets/vlans for a single site in a multisite cluster? on Getting Data In. 02-28-2020 07:59 AM
- Tagged Is it a valid configuration to have indexers on different IP subnets/vlans for a single site in a multisite cluster? on Getting Data In. 02-28-2020 07:59 AM
- Posted Re: Splunk Enterprise Security new log ERRORs after upgrading enterprise to 7.3.3 and ESS from 5.0.1 to 5.3.1 on Splunk Enterprise Security. 02-07-2020 08:50 AM
- Posted Re: How to handle CIM upgrades in regards to /opt/splunk/etc/apps/Splunk_SA_CIM/local/data/models/*json files? on All Apps and Add-ons. 12-16-2019 09:06 AM
- Posted Re: Upgraded from 7.0.5 to 7.3.3 and now get TsidxStats ERRORs in splunkd.log on Splunk Enterprise. 12-16-2019 07:08 AM
- Posted Splunk Enterprise Security new log ERRORs after upgrading enterprise to 7.3.3 and ESS from 5.0.1 to 5.3.1 on Splunk Enterprise Security. 12-13-2019 04:57 PM
- Tagged Splunk Enterprise Security new log ERRORs after upgrading enterprise to 7.3.3 and ESS from 5.0.1 to 5.3.1 on Splunk Enterprise Security. 12-13-2019 04:57 PM
- Posted Upgraded from 7.0.5 to 7.3.3 and now get TsidxStats ERRORs in splunkd.log on Splunk Enterprise. 12-13-2019 04:50 PM
- Tagged Upgraded from 7.0.5 to 7.3.3 and now get TsidxStats ERRORs in splunkd.log on Splunk Enterprise. 12-13-2019 04:50 PM
- Posted How to handle CIM upgrades in regards to /opt/splunk/etc/apps/Splunk_SA_CIM/local/data/models/*json files? on All Apps and Add-ons. 12-13-2019 04:39 PM
- Tagged How to handle CIM upgrades in regards to /opt/splunk/etc/apps/Splunk_SA_CIM/local/data/models/*json files? on All Apps and Add-ons. 12-13-2019 04:39 PM
- Posted Re: Upgraded from 7.0.5 to 7.3.2 and |datamodel searches now fail - anyone ever see this issue? on Splunk Enterprise Security. 12-13-2019 10:48 AM
- Posted Re: Upgraded from 7.0.5 to 7.3.2 and |datamodel searches now fail - anyone ever see this issue? on Splunk Enterprise Security. 12-13-2019 09:52 AM
- Posted Re: Upgraded from 7.0.5 to 7.3.2 and |datamodel searches now fail - anyone ever see this issue? on Splunk Enterprise Security. 12-13-2019 08:22 AM
- Posted Upgraded from 7.0.5 to 7.3.2 and |datamodel searches now fail - anyone ever see this issue? on Splunk Enterprise Security. 12-12-2019 07:59 AM
02-28-2020
07:59 AM
We have nine sites in a multisite cluster, with three to 15 indexers at each site. Each site's indexers are all on the same VLAN and IP subnet for their region. I need to expand one of the sites with more indexers, but the VLAN has run out of IP addresses. Is it possible to create a new VLAN with a different IP subnet range and add the new indexers to the previously configured site? For example, site2's indexers are in VLAN 2 on subnet 10.1.1.0/24. Can I add six new indexers to site2 with those new servers in VLAN 3 on subnet 11.1.1.0/24?
I looked over the documentation and didn't see a requirement that all of a site's indexers in a multisite cluster be on the same VLAN/IP subnet, but I wanted to check with real users in the community whether this is a legitimate configuration. Any pros and cons? We currently have two independent search heads but are moving to a search head cluster later this year.
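For background, site membership is assigned per peer in server.conf rather than derived from the network, which is why I'm hoping this works. A minimal 7.x sketch of what one of the new site2 peers would carry (manager hostname and key hypothetical):
# server.conf on a new site2 indexer, regardless of which VLAN/subnet it sits on
[general]
site = site2

[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = <cluster_key>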
thank you
02-07-2020
08:50 AM
1 Karma
I received a response from support:
"Thanks for contacting Splunk Support. This is a known issue with the logging. The message is benign, as it deals with how we log search operations during the parsing and wildcard-substitution phase of the search. There is a bug, SPL-170703, open to improve the logging around this."
12-16-2019
09:06 AM
Ah, this is getting ugly. I'll have to find all the changes to our datamodels over the past three years, remove the local .json files, and then add those changes back in the GUI on the newer CIM version. Would cloning the default Splunk datamodels into new custom datamodels when changes are required help? I guess I'll still be stuck with old .json schemas as I upgrade in the future. It looks like it's paramount to track every change to the datamodels via the GUI going forward. I'm not digging this at all. Thanks for the help.
12-16-2019
07:08 AM
Yes, we also upgraded Enterprise Security from 5.0.1 to 5.3.1.
12-13-2019
04:57 PM
I need to determine the significance of these errors before giving the green light to upgrade production. They all appeared after upgrading Enterprise from 7.0.5 to 7.3.3 and Splunk Enterprise Security from 5.0.1 to 5.3.1.
ERROR 2019-12-10 10:01:52.639 security SearchParser Missing a search command before ''. Error at position '2' of search query '| * inputlookup append=T http_intel * where * * * '.
ERROR 2019-12-10 10:01:52.649 security SearchParser Missing a search command before ''. Error at position '2' of search query '| * inputlookup append=T email_intel * where * * '.
ERROR 2019-12-10 10:01:52.670 security SearchParser Missing a search command before ''. Error at position '2' of search query '| * inputlookup append=T service_intel * where * '.
ERROR 2019-12-10 10:01:52.679 security SearchParser Missing a search command before ''. Error at position '2' of search query '| * inputlookup append=T registry_intel * where * '.
ERROR 2019-12-10 10:01:52.689 security SearchParser Missing a search command before ''. Error at position '2' of search query '| * inputlookup append=T process_intel * where * *'.
ERROR 2019-12-10 10:01:52.708 security01-dev SearchParser Missing a search command before ''. Error at position '2' of search query '| * inputlookup append=T user_intel * where * * * '.
ERROR 2019-12-10 10:01:52.718 security01-dev SearchParser Missing a search command before '*'. Error at position '2' of search query '| * inputlookup append=T certificate_intel * where'.
If I run a grep on all files for these variables, such as http_intel, email_intel, and service_intel, the results show them in ESS DA dashboards. We never saw this error prior to upgrading to 7.3.3.
Below is a sample; all the others are nearly identical.
ERROR 2019-12-10 04:01:39.131 security SearchParser Missing a search command before '*'. Error at position '2' of search query '| * inputlookup append=T http_intel * where * * *
A grep on http_intel found the below dashboard search:
DA-ESS-ThreatIntelligence/default/data/ui/views/threat_artifacts.xml:432:
| $tab_threat$ inputlookup append=T process_intel $max$ where * $network_filter$ $file_filter$ $registry_filter$ $service_filter$ $user_filter$ $process_filter$ $certificate_filter$ $email_filter$
| set_threat_collection_name("process_intel")
| eval ip=mvappend(src, dest), domain=mvappend(src, dest)
| inputlookup append=T certificate_intel $max$ where * $network_filter$ $file_filter$ $registry_filter$ $service_filter$ $user_filter$ $process_filter$ $certificate_filter$ $email_filter$
| set_threat_collection_name("certificate_intel")
| mvexpand certificate_serial
| get_certificate_serial
| eventstats values(certificate_serial) as certificate_serial,values(certificate_serial_clean) as certificate_serial_clean,values(certificate_serial_dec) as certificate_serial_dec by _key
| dedup _key,threat_collection
| inputlookup append=T email_intel $max$ where * $network_filter$ $file_filter$ $registry_filter$ $service_filter$ $user_filter$ $process_filter$ $certificate_filter$ $email_filter$
| set_threat_collection_name("email_intel")
| inputlookup append=T ip_intel $max$ where * $network_filter$ $file_filter$ $registry_filter$ $service_filter$ $user_filter$ $process_filter$ $certificate_filter$ $email_filter$
| set_threat_collection_name("ip_intel")
| inputlookup append=T http_intel $max$ where * $network_filter$ $file_filter$ $registry_filter$ $service_filter$ $user_filter$ $process_filter$ $certificate_filter$ $email_filter$
| fillnull value=0 updated,disabled
| set_threat_collection_name("http_intel")
| get_threat_attribution(threat_key)
| search $threat_id_filter$
| eval ip=coalesce(embedded_ip,ip), domain=coalesce(embedded_domain,domain), file_hash=coalesce(certificate_file_hash,file_hash), src_user=coalesce(certificate_subject_email,src_user), actual_src_user=coalesce(certificate_issuer_email,actual_src_user), file_name=coalesce(process_file_name,file_name), file_path=coalesce(process_file_path,file_path)
| mvappend_field(url, http_referrer)
| fillnull value="" threat_collection, source_type, threat_group, threat_category, malware_alias
| stats dc(ip) as ip_count, dc(domain) as domain_count, dc(url) as url_count, dc(http_user_agent) as http_user_agent_count, dc(header) as header_count by threat_collection, source_type, threat_group, threat_category, malware_alias
| addtotals fieldname=http_count http_user_agent_count, header_count
| addtotals fieldname=total ip_count, domain_count, url_count, http_count
| where total > 0
| fields threat_collection, source_type, ip_count, domain_count, url_count, http_count, total, threat_group, threat_category, malware_alias
| sort - total
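For reference, the grep across the apps directory was along these lines (paths assumed):
grep -rn -e 'http_intel' -e 'email_intel' -e 'service_intel' /opt/splunk/etc/apps/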
12-13-2019
04:50 PM
After upgrading from 7.0.5 to 7.3.3, these two log ERRORs are new:
ERROR 2019-12-10 08:01:19.755 security TsidxStats Missing search clause after 'WHERE' keyword 1
ERROR 2019-12-10 08:01:46.309 security TsidxStats Wildcards (*) are not supported in aggregate fields 1
I found a similar log message in another answer that mentions this is a bug:
https://answers.splunk.com/answers/593866/how-to-resolve-this-error-error-in-tsidxstats-wher-1.html
Has anyone seen these two log messages? I'm trying to gauge their significance before upgrading our production environment.
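To gauge how noisy they are, I've been counting them with a search like this (assuming the usual component/log_level field extractions for splunkd internal logs):
index=_internal sourcetype=splunkd log_level=ERROR component=TsidxStats
| stats count by host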
12-13-2019
04:39 PM
1 Karma
Upgraded from 7.0.5 to 7.3.3 and noticed splunkd Datamodel log ERRORs for removed macros
ERROR DataModelObject Failed to parse baseSearch. err=Error in 'SearchParser': The search specifies a macro 'search_activity' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information., object=Search_Activity, baseSearch= `search_activity`
Support recommended just deleting the .json file, but how do I know I'm not breaking anything? This brought up the problem below.
Problem: The macro 'search_activity' has been removed in 7.3.3, yet the datamodel schema .json files in the local/data/models directory still reference this macro, and the splunkd logs show error messages that the macro no longer exists.
/opt/splunk/etc/apps/Splunk_SA_CIM/local/data/models/Splunk_Audit.json
"calculations": [],
"constraints": [],
"lineage": "Search_Activity", <<does not exist in default CIM 4.13
"baseSearch": " search_activity " <<does not exist in default CIM 4.13
Another flavor of this problem is the change of the .json files (datamodel schemas) between CIM versions. The local/Authentication.json from 4.11 below is totally different from 4.13's default Authentication.json.
4.11 local/Authentication.json:
-bash-4.2$ cat /opt/splunk/etc/apps/Splunk_SA_CIM/local/data/models/Authentication.json | less
{
"modelName": "Authentication",
"displayName": "Authentication",
"description": "Authentication Data Model",
"objectSummary": {
"Event-Based": 10,
"Transaction-Based": 0,
"Search-Based": 0
},
"objects": [
{
"objectName": "Authentication",
vs
4.13 default/Authentication.json
-bash-4.2$ cat /opt/splunk/etc/apps/Splunk_SA_CIM/default/data/models/Authentication.json | less
{
"modelName": "Authentication",
"displayName": "Authentication",
"description": "Authentication Data Model",
"editable": false,
"objects": [
{
"comment": {
"tags": [
"authentication"
]
},
"objectName": "Authentication",
Question: How are customers supposed to upgrade to new CIM versions and use the new default .json files if local is overriding those changes? I do not see anywhere how these local .json files were created in the first place by us users. Do we reverse engineer how they were created, delete the local *.json files, and try to recreate them in the GUI on the newer CIM version? This is not clear at all, and I would appreciate any guidance on best practices. I have no idea what will break if I just remove all my local *.json files, and I did not see this issue mentioned anywhere in the upgrade documentation. I assume this is common.
I did look at what changes the Managed Apps, CIM -> Setup page makes to files: there we changed tags and indexes, which touches the local macros.conf and datamodels.conf, and Settings -> Datamodels -> edit acceleration touches the local datamodels.conf. I see nothing that creates the local .json files.
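As a first step, diffing the pretty-printed local schema against the new default at least shows what local is overriding (a sketch; assumes python is available on the box):
cd /opt/splunk/etc/apps/Splunk_SA_CIM
diff <(python -m json.tool local/data/models/Authentication.json) \
     <(python -m json.tool default/data/models/Authentication.json)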
thank you
12-13-2019
10:48 AM
First, let me say thank you for your assistance on this issue.
Perhaps I didn't express my problem correctly. I'd like to run the macro and have the output be the same as it used to be with |datamodel searches, that is, both datamodel fields and Splunk-parsed fields in the left-hand column of returned events. I also moved your macro to Splunk_SA_CIM/local/macros.conf as a test, but got the same results as before, when it was in its own app.
The macro provides the below search when run in verbose mode:
[| datamodel Change_Analysis All_Changes
| table *
| spath path=constraints{}.search output=search
| mvexpand search
| format "(" "(" "" ")" "AND" ")"
| rex field=search mode=sed "s/\\\"/::::/g s/\"//g s/::::/\"/g"
| rename COMMENT1of2 AS "The rest of the code expands the macro because otherwise we get this error:"
| rename COMMENT2of2 AS "Error in 'SearchParser': The search specifies a macro 'cim_DataModelNameHere_indexes' that cannot be found"
| rex field=search "[^ ]+ (?<macro_name>[^ ]+)"
| map search="|makeresults | eval macro_definition=[ |rest /servicesNS/-/Splunk_SA_CIM/admin/macros splunk_server=local | search title=$macro_name$
| rex field=definition mode=sed \"s/\\\"/\\\\\\\"/g s/^/\\\"/ s/$/\\\"/\"
| eval definition=if(len(definition)>=5, definition, \"(index=*)\")
| return $definition ]
| eval search = replace(\"$search$\", \" `$macro_name$`\", \" \" . macro_definition . \" \")
| table search"
| rename search AS search] AND sourcetype="pulse:connectsecure"
(Put below on a single line to save space.)
SELECTED,FIELDS,host,1,source,1,sourcetype,1,user,1,
INTERESTING,FIELDS,app,1,changed_from,1,changed_to,1,#date_hour,1,#date_mday,1,#date_minute,2,date_month,1,#date_second,1,date_wday,1,#date_year,1,date_zone,1,dest,1,dest_is_expected,1,dest_pci_domain,1,dest_requires_av,1,dest_should_timesync,1,dest_should_update,1,direction,1,eventtype,1,fw,1,id,1,index,1,#linecount,1,message,3,msg,3,msg_id,3,#pri,1,protocol,1,punct,1,realm,1,result,3,result_id,3,role,1,roles,1,splunk_server,2,tag,2,tag::eventtype,2,time,2,#timeendpos,1,#timestartpos,1,type,1,user_watchlist,1,vendor_product,1,vpn,1
However, I'm not getting the actual datamodel fields with the above macro, unlike when I just run the search as:
|datamodel Change_Analysis All_Changes search
| search sourcetype=pulse:connectsecure
| kv
With that search I get the datamodel fields but not the Splunk-parsed fields like the above:
SELECTED,FIELDS,host,1,source,1,sourcetype,1,user,1,
INTERESTING,FIELDS,All_Changes.action,1,All_Changes.change_type,1,All_Changes.command,1,All_Changes.dest,1,All_Changes.dvc,1,#All_Changes.is_Account_Management,1,#All_Changes.is_Auditing_Changes,1,#All_Changes.is_Endpoint_Changes,1,#All_Changes.is_Network_Changes,1,#All_Changes.is_not_Account_Management,1,#All_Changes.is_not_Auditing_Changes,1,#All_Changes.is_not_Endpoint_Changes,1,#All_Changes.is_not_Network_Changes,1,All_Changes.object,1,All_Changes.object_attrs,1,All_Changes.object_category,1,All_Changes.object_id,1,All_Changes.object_path,1,All_Changes.result,3,All_Changes.result_id,3,All_Changes.src,1,All_Changes.status,1,All_Changes.tag,2,All_Changes.user,1,All_Changes.vendor_product,1,changed_from,1,changed_to,1,message,3,msg_id,3,realm,1,role,1,roles,1
12-13-2019
09:52 AM
My app for parsing the log events has global permissions (read: everyone, write: admin) and is applied to "all apps". I also confirmed with btool that the stanzas for parsing those events work in both app/user context and global context. The app I created for your macro has the same settings. I restarted Splunk and tried debug/refresh as well. I can attach screenshots of the parsed fields with your app and the |datamodel search fields if that would help. I'm not sure what else I'm missing.
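For reference, the btool check was along these lines (sourcetype taken from my searches):
/opt/splunk/bin/splunk btool props list pulse:connectsecure --debug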
12-13-2019
08:22 AM
Thanks, woodcock. I tried out your macro and it works, but it's not what I was expecting.
`SIEMMacro_datamodelCIM(Change_Analysis, All_Changes)` AND sourcetype="pulse:connectsecure"
Yes, I now have results, and the normal "Search and Reporting" Splunk fields are parsed out on the left-hand side, but I no longer have the datamodel fields parsed there. So I assume it is no longer possible to have the datamodel fields and the normal Splunk "Search and Reporting" fields co-mingled in the same search results. I guess I'll just use the |datamodel search with just "sourcetype" and add |kv at the end (below); I can still verify the datamodel is parsing the logs correctly, which was my only use for this type of search anyway.
|datamodel Change_Analysis All_Changes search
| search sourcetype=pulse:connectsecure
| kv
I'm glad you replied, as I thought there was a major bug, but now I know Splunk changed the code for optimization in 7.1.x and didn't warn users, or if they did, I missed it.
12-12-2019
07:59 AM
1 Karma
On 7.0.5, with our search head running Enterprise Security, we were able to run Search and Reporting searches, |tstats searches for our ESS correlation rules, and |datamodel searches such as "|datamodel Authentication Failed_Authentication search | search index=os sourcetype=linux_secure" with no issues. I use the |datamodel searches to make sure the datamodel is picking up the fields in my logs before writing correlation rules. All worked until we upgraded to 7.3.2. Now normal Search and Reporting still works and |tstats searches for correlation rules still work, but |datamodel searches do not find any events; they return "No results found. Try expanding the time range." However, if I remove the "index=os" from the same datamodel search, as below:
|datamodel Authentication Failed_Authentication search | search sourcetype=linux_secure
Results return, but the only fields parsed are those from the Authentication datamodel; all the other fields you would normally see in a "Search and Reporting" search, such as index, user, etc., are gone. The issue has been raised with Splunk support but no comment yet. I was curious if anyone else has seen this. We did rebuild our datamodels, but it made no difference.
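As a cross-check, the accelerated side still returns counts for the same constraints with something like the below (a sketch; our datamodels are accelerated):
| tstats count from datamodel=Authentication where nodename=Authentication.Failed_Authentication index=os sourcetype=linux_secure by sourcetype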
10-10-2019
01:55 PM
In what version of Stream is exporter_ip broken, and what version fixes it? Is there an SPL number?
07-16-2019
02:51 PM
We experienced the same results with half of our internal Splunk certs expired. That is, all processes kept running and there were no TCP errors in the logs, just that one log message: "Server certificate is now invalid. It expired on Sat xxxx." Traffic also still looks encrypted.
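For anyone else checking, the expiry date can be read straight off the server cert (default path shown; adjust for your deployment):
openssl x509 -enddate -noout -in /opt/splunk/etc/auth/server.pem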
04-09-2019
03:24 PM
Never mind, I found a duplicate of this: https://answers.splunk.com/answers/523160/confusing-behaviour-of-fieldalias.html. In my case it's not 100% that only the first log message gets the src for a failed-password log, but now I know field aliases run in parallel and no dependencies are allowed.
04-09-2019
08:39 AM
PROBLEM: The field "src" is not parsed out for the "Failed password for invalid user" events, but "src" is parsed out for the two PAM messages with rhost. If I do a failed login from a valid user account, i.e. "Failed password for xxxxx", then "src" is parsed correctly; there are also no PAM messages with rhost fields in that event, so it seems to work correctly.
Below is a log sample of a failed password for an invalid user, where src is not parsed at all when PAM messages are also involved in the overall login attempt.
Apr 9 14:43:48 test-backup sshd[16780]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.224.24 << src is parsed
Apr 9 14:43:48 test-backup sshd[16780]: Failed password for invalid user april9 from 192.168.224.24 port 36392 ssh2 << src is not parsed
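As a workaround sketch (hypothetical stanza, not the stock TA extraction), an explicit search-time extraction would sidestep the field-alias ordering entirely:
# props.conf (sketch)
[linux_secure]
EXTRACT-ssh_failed_src = Failed password for (?:invalid user )?\S+ from (?<src>\d{1,3}(?:\.\d{1,3}){3})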
03-08-2019
06:48 AM
I'm not sure why the app makers don't just rename the app to TA-Sudo so the regex for importing apps into ESS works right from the get-go. They mention changing the regex to match TA_, but perhaps that imports apps into ESS that you may not want imported. Changing the ESS regex just seems more drastic than the makers renaming the app to work with ESS.
02-08-2019
02:56 PM
What you just said, "Because the Indexer will index local files to itself", is my question: where is the setting to automatically index local files to itself? This was the part I was wondering about. So if it indexes local files to itself, such as /var/log/messages from Splunk_TA_nix, where is this setting? Or do you just take it for granted?
Also, if the indexer is indexing its local files from any inputs.conf automatically, then when I run a search for these events from indexer host1, I see splunk_server showing different indexers, host2 and host3. Does this mean the indexed data from host1 was replicated over to the other indexers and the search just happened to use the data from host2 and host3 instead of host1?
02-08-2019
02:29 PM
The key differentiator here is that the host is an "indexer" itself. I am monitoring /var/log/* via the inputs.conf of Splunk_TA_nix. There are no settings in the indexer's outputs.conf referencing any auto-discovery for its index cluster. So how did /var/log/messages get indexed?
1) For indexers only, does setting an inputs.conf to monitor a file just magically get it indexed locally, with no outputs.conf setting showing any destination?
2) For indexers only, does the indexer just know to use auto-discovery since it's part of the cluster environment, and then magically look at its server.conf for the CM and get its list of indexers to forward to, perhaps including itself?
3) In my search results, the indexer is host1 and the splunk_server was indexer host2 and indexer host3.
I'm still perplexed as to how /var/log/messages from an indexer running Splunk_TA_nix is getting indexed.
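What I've done so far to verify (a sketch; paths and index name assumed):
# confirm whether any outputs.conf is actually in effect on the indexer
/opt/splunk/bin/splunk btool outputs list --debug
From the search head, a search like "index=os host=host1 source=/var/log/messages | stats count by splunk_server" at least shows which peers answer for host1's events, though that alone doesn't distinguish local indexing from replication.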
02-08-2019
02:01 PM
I have indexers in a cluster running Splunk_TA_nix and am monitoring /var/log in inputs.conf. I can see the log events from the search head with a splunk_server of a different indexer in the cluster. Two questions:
1) How did /var/log/messages, as an example, get indexed? Did it get indexed locally, and if so, how did it know to do that? Or did the events get forwarded to other indexers in the cluster, like how our heavy forwarders use indexer discovery by contacting the cluster master for the list of indexers? I ask because I do not see any outputs.conf configured on the indexers showing any auto-discovery; the cluster master settings are only in server.conf. I do not see how these local OS logs are being indexed, and it's bothering me.
2) Can I assume the search results showing /var/log/messages from host1 with splunk_server=host2 are due to replication, or is it host1 forwarding to host2 for indexing?
thanks
02-06-2019
01:50 PM
I assume you still need Splunk_TA_nix on your HF running syslog-ng, and on the indexers for UFs running on Linux hosts, as these have the props and transforms for the Linux logs, while the Splunk App for Unix and Linux is for the SH for visuals. So for linux_secure, the requirements are the "Splunk App for Unix and Linux" and "linux_secure" on the SHs, and Splunk_TA_nix on the indexers and HFs, and I guess the UFs too. Is this true?
02-01-2019
02:50 PM
When they say to remove Splunk_TA_nix from the SH before installing, does that also mean removing Splunk_TA_nix from all indexers, HFs, and the d/s? Also, is disabling the app sufficient, or does the app directory need to be removed entirely? I want to test this first before removing TA_nix completely.
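For the test itself, disabling per instance from the CLI should be enough to take the app out of play (credentials hypothetical):
/opt/splunk/bin/splunk disable app Splunk_TA_nix -auth admin:changeme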
01-25-2019
03:59 PM
Note: the <<<<<<< >>>>>>> should have said that the below output is NOT seen during the time of the problem; there is no "percent = 100" line, that line is just blank.
01-25-2019
03:57 PM
We're running syslog-ng with a HF, and logrotate runs hourly. Sixteen or so different web proxies send logs to the syslog-ng server with the HF. Sometimes one of the 16 proxy log sources is no longer read by the HF, even though the proxy log file exists in syslog-ng and can be read. At the top of the hour it fixes itself and the HF reads the file again, but I'm out of logs for an hour for correlation rules. DEBUG was enabled, and below is the log entry right at the time of the last event seen in Splunk:
01-25-2019 22:00:05.997 +0000 DEBUG TailingProcessor - Defering file=/var/syslog/proxy/192.168.251.141/proxy.log unsafe as it does not exist anymore, scheduling a oneshot timeout instead.
./splunk list inputstatus | grep -A4 proxy | grep -A4 192.168.251.141
/var/syslog/proxy/192.168.251.141
parent = /var/syslog/proxy/*/*.log
type = directory
<<<<<<< >>>>>>>
/var/syslog/proxy/192.168.251.141/proxy.log
file position = 30610690
file size = 30610690
parent = /var/syslog/proxy/*/*.log
percent = 100.00
I've read that logrotate should not be used with syslog-ng. Has anyone ever seen this message?
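One option I'm considering, if the rotation itself is the trigger, is switching logrotate to copytruncate so the inode the tailer holds never disappears; a sketch (path assumed):
# /etc/logrotate.d/proxy (sketch)
/var/syslog/proxy/*/proxy.log {
    hourly
    rotate 24
    copytruncate
    missingok
    notifempty
}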