That doesn't seem right. Those fields should not be empty, so something must be overwriting them at search time (or your indexes are damaged, but let's assume they aren't). Try

| tstats count where index=dfini by source sourcetype host

That should show you the indexed field values. Then review your search-time definitions to see what overwrites those values.
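If the indexed values look fine, one way to hunt for the search-time overrides (a sketch; "your_sourcetype" is a placeholder, not a value from this thread) is to dump the effective props with btool on the search head and look for FIELDALIAS, EVAL, or REPORT entries touching those fields:

```
# run on the search head; "your_sourcetype" is a placeholder
splunk btool props list your_sourcetype --debug
```

The --debug flag shows which app each setting comes from, which usually points straight at the offending configuration.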
Hi All,
I am trying to create a scatter dashboard or similar in Dashboard Studio to show debit transaction amounts over time.
A query like this works well in Search, but translates poorly to the dashboard:
source="Transaction File.csv" "Debit Amount"="*" | stats values("Debit Amount") BY "Posted Transactions Date"
I am aware I likely need to convert the date from string format to date format within my search, something to the effect of: | eval Date = strptime("Posted Transactions Date","%d/%m/%y")
But I am struggling to get the final result.
I have also played around with using the _time field instead of Posted Transaction Date field and with timecharts without success which I think is likely also a formatting issue.
Eg:
source="Transaction File.csv" | timechart values("Debit Amount")
As there are multiple debit amount values per day in some cases, I would ideally like a 2nd similar dashboard that sums these debits per day instead of showing them as individual values whilst also removing 1 outlier debit amount value of 7000.
Struggling a bit with the required search(es) to get my desired dashboard results.
Any help would be appreciated, thank you!
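For what it's worth, a sketch of the summed-per-day search described above, assuming the date really is in %d/%m/%y format and the outlier is exactly 7000, might look like:

```
source="Transaction File.csv" "Debit Amount"=*
| eval _time = strptime('Posted Transactions Date', "%d/%m/%y")
| where 'Debit Amount' != 7000
| timechart span=1d sum('Debit Amount') AS "Total Debits"
```

Note the single quotes around field names with spaces inside eval and where; double quotes there would make them string literals rather than field references.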
Exchange is a relatively big solution, so the answer can vary depending on what you want to ingest. If you want just the message tracking logs, you can easily ingest them using a monitor input. I've never dealt with SMTP or OWA logs so I can't tell you how those work, but I suppose they should also be relatively easy to read. The problem might be in parsing the data, of course. QRadar is simply a different product, so don't use it for comparison.
You didn't say what you have tried so far. Maybe you have some small, easily fixable mistake in your configs, or maybe your approach is completely wrong. Show us what you've got.
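As a sketch only (the path, sourcetype, and index below are assumptions, not values from this thread — the path is the common Exchange default and the sourcetype is the one the Splunk Add-on for Microsoft Exchange expects), a monitor input for message tracking logs on the UF could look like:

```
# inputs.conf on the Exchange server's Universal Forwarder
[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*.LOG]
disabled = 0
sourcetype = MSExchange:2013:MessageTracking
index = exchange
```

Restart the UF after adding the stanza and verify the files show up with | tstats count where index=exchange by source.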
I'm not sure whether any predefined sourcetypes come with the add-on. The description says it provides transforms and some example props.conf, but you have to look into it yourself. In any case, if kv is to do its job, it must know what sourcetype it's dealing with, and with makeresults you haven't provided one. This add-on also seems strange: it says it extracts some fields at index time, which, unless you really need index-time extractions, seems like overkill.
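To illustrate the distinction (field names below are made up): kv's default key=value extraction does work even on makeresults, but sourcetype-driven REPORT transforms from an add-on only apply to real events carrying that sourcetype, which is why testing with makeresults can be misleading:

```
| makeresults
| eval _raw="src=10.0.0.1 dst=10.0.0.2 action=allow"
| kv
| table src dst action
```

To test the add-on's own extractions, search actual indexed events with the add-on's sourcetype instead.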
Hi @pavithra, add your information to the inputs.conf below: [monitor://C:\Users\_svcAPACCommVault01\OneDrive - Lendlease\Desktop\csv\*.csv]
disabled = 0
sourcetype = backup
index = acn_lendlease_commvault
host = your_host
Ciao. Giuseppe
1. The thread is relatively old so you might not get many responses. It's usually better to post a new question (linking to old thread for reference) than to dig up a several-years-old thread. 2. The .xsl file is meant to be applied on the Vault side - installed to ENE, not used on Splunk's side. https://docs.splunk.com/Documentation/AddOns/released/CyberArk/Setup
Dear All,

I need your assistance in fetching Microsoft Exchange Server logs using the Splunk Universal Forwarder. I can provide the paths for the MSG Tracking, SMTP, and OWA log files. The goal is to configure the Universal Forwarder to collect these logs and forward them to a central Splunk server.

Given that the Splunk documentation indicates the MS Exchange App is end-of-life (EOL), is it necessary to use an add-on? The documentation suggests creating GPO policies and making other changes.

However, in IBM QRadar the process is simpler: you install the WinCollect agent, specify the paths for MSG Tracking, SMTP, and OWA logs, and the agent collects and forwards the logs to the QRadar Console. The Auto Discovery feature in QRadar then creates the log source automatically.

Is there a simpler and more straightforward method to collect these logs using the Splunk Universal Forwarder? Thank you in advance for your assistance.
HF on its own does not have any "HA" or "clustering" feature. There are possibilities to manually craft some active-passive solutions but they require quite a lot of "external architecting" - doing a lot of stuff semi-manually on the OS level. I think there was a .conf presentation about it a few years ago but I can't find it.
1. This is not a professional support service. People do have their own lives and respond when they have some spare time. 2. For typical Apache httpd logs there are two built-in sourcetypes: access_combined and apache_error.
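As a sketch (the log paths below are typical defaults and vary by distribution), assigning those built-in sourcetypes in a monitor input could look like:

```
# inputs.conf; paths are common RHEL-style defaults, adjust to your system
[monitor:///var/log/httpd/access_log]
sourcetype = access_combined

[monitor:///var/log/httpd/error_log]
sourcetype = apache_error
```

Using the built-in sourcetypes gets you sensible timestamp recognition and field extractions without writing your own props.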
Dashboard Studio does not allow custom JS and has no built-in capability to do this on its own. You could try to do something like this using a SimpleXML dashboard and some custom JS scripting on your side.
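For reference, SimpleXML supports attaching a custom script via the root element's script attribute (the filename here is hypothetical; the file would live in your app's appserver/static directory):

```xml
<dashboard script="my_custom.js">
  <label>Dashboard with custom JS (sketch)</label>
  <row>
    <panel>
      <html>
        <div id="custom_target"/>
      </html>
    </panel>
  </row>
</dashboard>
```

The script can then use the Splunk JS stack (RequireJS modules) to manipulate the dashboard's DOM or search results.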
Forwarders send requests to the Cluster Manager for a list of available indexers. The requests are sent at adjustable intervals. See https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/indexerdiscovery#How_indexer_discovery_works for details.
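As a sketch (URIs, group names, and the key below are all placeholders), the forwarder side of indexer discovery lives in outputs.conf, and polling_rate is the knob that adjusts how often the forwarder refreshes the indexer list:

```
# outputs.conf on the forwarder; all values are placeholders
[indexer_discovery:cluster1]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = changeme
polling_rate = 10

[tcpout:discovered_indexers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = discovered_indexers
```

The pass4SymmKey must match the one configured on the Cluster Manager for discovery requests to be accepted.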
You can compose your own download link based on the repo's contents and the existing Solaris link, but the general rule for obtaining old versions is "contact Support for it".