I downloaded the Splunk-Windows-64.zip
There is no install file, no setup file. Nothing I can find to install the program with. Did I miss something or did you guys intentionally leave that out?
Sorry for the bad translation. I have a Cloud client. The license is 50 GB per day, and additional DDAA has been contracted, which is not very clear to me; the shared documentation seems to be outdated or unavailable. https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/User/DataArchiver https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Service/SplunkCloudservice#Storage https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Service/SplunkCloudservice#Search When I go to "Settings" > "Indexes" I can see the indexes used by this client, plus others that appear to be internal to Splunk. I see that one of the indexes has already reached its maximum size of 500 GB, and I don't know if it has DDAA active. According to this image, do I understand correctly that DDAA is active? Must I do something? I am worried that information is being lost, since the client needs to retain that data for a long time.
Hi All, I need help in starting to set up Splunk Connect for Syslog (SC4S). I am not sure how to start, or what procedure and documentation to follow. I am using Splunk Cloud 8.2.1.
Hi all, I need help getting the trailing number from a field in a search.
Examples of the field:
id = bdf73ad5-4499-4f70-b7e3-e2c81ae868c3-default-asset-423447
id = bdf73ad5-4499-4f70-b7e3-e2c81ae868c3-default-asset-6672
id = bdf73ad5-4499-4f70-b7e3-e2c81ae868c3-default-asset-4232323
I was using:
| eval stripped_asset_id=substr(id, -6)
However, that is only consistent when the trailing number has exactly 6 digits, and it often has more or fewer. How can I take everything after the last dash "-"?
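One way to grab everything after the final dash, assuming the trailing token is always numeric as in the examples above, is a rex with an end-of-string anchor:

```
| rex field=id "-(?<stripped_asset_id>\d+)$"
```

Alternatively, | eval stripped_asset_id=replace(id, "^.*-", "") strips everything up to and including the last dash, regardless of what follows it.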
Is it possible to ingest data specifically related to Microsoft Defender Safe Links? We have tried both the Microsoft 365 Defender Add-on for Splunk and the Splunk Add-on for Microsoft Security without success. It appears that both of these collect data only from incidents and alerts.
Any help appreciated.
Is there a way to create a report, using metadata or any other data, that lists all the fields available by index and sourcetype? For example:
I just need the index, the sourcetype, and all available fields under them listed out as a report.
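As far as I know there is no single command for this, but a two-step sketch is possible (index/sourcetype names and the time range below are placeholders): first enumerate the index/sourcetype pairs, then run fieldsummary against each pair:

```
| tstats count where index=* by index, sourcetype

index=your_index sourcetype=your_sourcetype earliest=-60m
| fieldsummary
| fields field
```

Note that fieldsummary only sees fields present in the events it scans, so the result reflects the chosen time range rather than a complete schema.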
I am trying to build a Splunk add-on that polls an API. I have 1800 input entries that are set to poll every 24 hours. The problem I'm seeing is that I get an HTTP 429 error from the API destination. Is there a way to tell Splunk to run only a single API call at a time, so as not to overload the destination server?
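I don't believe the add-on framework exposes a built-in "one call at a time" switch, but the usual workaround inside the input's own code is to serialize requests and back off on 429. A minimal sketch in plain Python (the fetch callable and delays are hypothetical stand-ins for the real API call):

```python
import time

def call_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Call fetch() sequentially; on HTTP 429, wait and retry with
    exponential backoff. fetch must return (status_code, body)."""
    delay = base_delay
    for _ in range(max_retries):
        status, body = fetch()
        if status != 429:
            return body
        time.sleep(delay)  # give the destination time to recover
        delay *= 2
    raise RuntimeError("destination still rate-limiting after retries")

# Simulated endpoint: rate-limits the first two calls, then succeeds.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    return (429, None) if calls["n"] <= 2 else (200, "ok")

print(call_with_backoff(fake_fetch, base_delay=0.01))  # prints "ok"
```

Spreading the 1800 inputs' schedules across the day (rather than all at the same cron time) also reduces the burst hitting the destination.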
After upgrading the Splunk Add-on for Microsoft Office 365 to version 3.0.0, it is required that we disable ServiceHealth.Read.All in Office 365 Management APIs and enable ServiceHealth.Read.All in Microsoft Graph, as per the app doc. After following the instructions and assigning the delegated type to ServiceHealth.Read.All under Microsoft Graph, I'm getting the below error in the logs:

level=ERROR pid=23448 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api.GraphApiConsumer pos=GraphApiConsumer.py:run:74 | datainput=b'ServiceUpdateMessages' start_time=1651772811 | message="Error retrieving Graph API Messages." exception='NoneType' object is not iterable

The inputs under Office 365 Management APIs are working fine, which indicates that configuration data like the client ID and secret are correct. Can someone please let me know what might be causing this issue?
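For what it's worth, that exception usually means the consumer iterated over a response that had no "value" array, e.g. an error payload returned because the new Graph permission has not been granted admin consent yet. A sketch of the failure mode (function name hypothetical, not the add-on's actual code):

```python
# A Graph messages response normally wraps results in a "value" array;
# if the response is an error payload instead, .get("value") returns None,
# and iterating over None raises "'NoneType' object is not iterable".
def extract_messages(response: dict) -> list:
    value = response.get("value")
    if value is None:  # e.g. permission not consented / error payload
        raise ValueError(f"unexpected Graph response: {response}")
    return list(value)
```

So it may be worth double-checking that admin consent was granted for the Graph permission, and whether a delegated (vs. application) permission type is what the add-on expects.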
I have a field extraction I've created that replaces a couple of previous extractions I deleted. However, I have a couple of reports that still reference the deleted extractions when I view the available fields in the events. I've tried re-creating the report and still get the same behavior. I will also mention that if I change the evtid in the query below to another possible value, I get the available fields I expect to see. Any ideas what might be going on? The extracted field is vmax_message. vmax_host is also an extracted field and works just fine.
index=vmax_syslog sourcetype=vmax:syslog fmt=evt vmax_host=*san* evtid=5200 sev="warning"
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| chart values(symid) AS symid values(vmax_message) AS message values(sev) AS severity values(Time) as Time by vmax_host
Hi Splunkers, greetings! I have data coming from a syslog server with sourcetype "syslog". I have split the data across three different indexes in transforms.conf using MetaData:Index and a regular expression like (abc* | xyz*), and it is working fine. Now I need to hardcode the sourcetype for each stream going to a different index: the sourcetype currently comes in as "syslog", but I want a separate sourcetype name for every separate index. Can you please help?
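This can be done with a second transform per stream that overrides MetaData:Sourcetype, chained after the index-routing transform. A sketch, with the stanza names, regex, and new index/sourcetype names as placeholders:

```
# props.conf
[syslog]
TRANSFORMS-route = route_abc_index, route_abc_sourcetype

# transforms.conf
[route_abc_index]
REGEX = abc
DEST_KEY = _MetaData:Index
FORMAT = abc_index

[route_abc_sourcetype]
REGEX = abc
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::abc:syslog
```

Repeat the pair for each of the three patterns; transforms listed on one TRANSFORMS- line run left to right.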
What Splunk enterprise version could I use to capture all the logs to include Windows XP, 7 and server 2008 and Solaris 9? Currently have Splunk 6.5.3.
My webhook endpoint needs to retrieve the results of the alert that was triggered. Am I correct in thinking that the payload's "sid" value is the same as the Enterprise REST API's {search_id} value in the search/jobs/{search_id}/results endpoint?
I'm a little surprised the webhook docs don't say anything about this since it seems like the logical next step. Normally I'd just try it myself, but we're in a gigantic corporate environment, there's tons of paperwork to get permission to do anything, etc. etc. -- much faster to just ask if I'm on the right track. And, I guess, the other obvious question is, if I'm not on the right track, how do I retrieve the search results based on a webhook payload?
Thanks in advance!
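Assuming the sid from the webhook payload is indeed the job's search_id (host name and port below are placeholders), the results URL would be built like this; a GET with your usual REST authentication should then return the triggered alert's results:

```python
from urllib.parse import quote

def results_endpoint(base_url: str, sid: str) -> str:
    """Build the REST URL for a job's results from a webhook payload sid."""
    return (f"{base_url}/services/search/jobs/"
            f"{quote(sid, safe='')}/results?output_mode=json")

# e.g. results_endpoint("https://splunk.example.com:8089", "scheduler__admin...")
```

The sid is percent-encoded because scheduled-alert sids can contain characters that are not URL-safe.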
Hi Team,
We want to allowlist IP addresses in CrowdStrike, so we need to know the IP address range Splunk uses to communicate with CrowdStrike through the Falcon Event Streams add-on.
I'm in the middle of a historical data migration from on-prem indexers to S3 in Splunk Cloud. Some of the data is making it through, but I'm getting a ton of messages like this in splunkd.log on the on-prem indexers:

WARN S3Client - Error getting object name = <...GUID/receipt.json(0,-1,) to localPath = /opt/splunk/var/run/splunk/cachemanager/receipt-(some numbers.json>
Hello All, I have faced an interesting issue. I have an ingest-time extraction:

[extract]
REGEX = ^([^\r\n]+)$
FORMAT = message::$1
DEST_KEY = _raw

Truncation is not the cause: I set TRUNCATE to zero, and the whole event is no longer than 5000 characters, BUT the message field is cut off after exactly 4023 characters. Where is it written that an extracted field cannot be longer than this? Thanks.
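One limit worth checking is transforms.conf's LOOKAHEAD, which caps how many characters into the event the regex engine searches and defaults to 4096; that would line up roughly with a cut around 4023 characters. A sketch, with the raised value as a guess:

```
# transforms.conf -- assuming the cut-off comes from the regex
# search window rather than from TRUNCATE
[extract]
REGEX = ^([^\r\n]+)$
FORMAT = message::$1
DEST_KEY = _raw
LOOKAHEAD = 32768
```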
Hello Splunkers! I'm pretty new to Splunk, and I inherited an old Splunk project that I didn't set up at all. I'm trying to train myself on it, but... I have some problems I couldn't solve alone. I have one search head, one indexer, and between 3 and 5 forwarders depending on my needs. Here is the VM of my indexer: almost all the logs I collect go to /dev/vda1, which is not supposed to be the case. I've overridden the default storage location, but I guess it doesn't matter... /opt/splunk/etc/system/local/indexes.conf:

[main]
homePath = /mnt/data/$_index_name/db

I assume that's the reason why I still get those messages. Please let me know if I did something wrong or if I missed something. Thanks in advance for your help! Regards, Antoine
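One thing to note: overriding homePath for [main] alone leaves coldPath, thawedPath, and every other index at their defaults under $SPLUNK_DB, which could explain data still landing on /dev/vda1. A sketch of a fuller override (paths illustrative):

```
# indexes.conf
[volume:data]
path = /mnt/data

[main]
homePath   = volume:data/main/db
coldPath   = volume:data/main/colddb
# thawedPath does not accept volume: syntax
thawedPath = /mnt/data/main/thaweddb
```

Alternatively, pointing SPLUNK_DB at /mnt/data in splunk-launch.conf moves the defaults for all indexes at once.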
Hi,
I am quite new to Splunk and coming from Elasticsearch, so my knowledge is biased. However, I did notice that Elastic performs faster on large datasets. I think one of the main reasons is the on-the-fly field extraction Splunk performs when searching.
Hence we created a sourcetype with ingest-time field extraction. Now I would expect these fields to always be available, even when choosing fast mode. However, this seems not to be the case.
So my questions: (How) are fields stored in a Splunk index when they are extracted during ingest?
Can I tell Splunk to NOT extract extra fields for a certain index when in fast mode or smart mode, but just show the fields extracted during ingest? Thanks
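On the first question: ingest-time (indexed) fields are written into the index's tsidx files as key::value pairs rather than re-extracted at search time. To make them show up like ordinary fields, they are typically declared in fields.conf on the search head, and they can also be queried directly with tstats in any search mode. A sketch (sourcetype and field names are placeholders):

```
# props.conf (indexing tier)
[my_sourcetype]
TRANSFORMS-idx = my_indexed_field

# transforms.conf
[my_indexed_field]
REGEX = user=(\w+)
FORMAT = user_name::$1
WRITE_META = true

# fields.conf (search head)
[user_name]
INDEXED = true
```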
Hello everyone,
I am currently working on integrating Citrix NetScaler with Splunk. I'd like to see the AppFlow/NetFlow data in Splunk to use it for traffic balancing.
My setup is as follows:
Splunk v8.2.4
Splunk App for Stream v 8.0.2 (and the TAs as well)
Splunk Add-on for Citrix NetScaler v8.1.1
I was following the docs and installed everything as described. The files from the Citrix TA are copied to the Stream app (https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/ConfigureIPFIXinputs).
Even though the NetFlow elements appear, they are not getting decoded, and I am seeing this:
Following IANA, I was able to figure out that "5951" is the enterprise ID of the manufacturer: https://www.iana.org/assignments/enterprise-numbers/enterprise-numbers (which is NetScaler in this case). Unfortunately, I did not find any documentation on the decoding procedure for those bytes.
While trying to understand what the streamfwd binary does and how the solution is embedded into the Python scripts, I stumbled over one interesting fact: in $SPLUNK_HOME/etc/apps/splunk_app_stream/bin/splunk_app_stream/models/vocabulary.py there is a reference to the URL "http://purl.org/cloudmeter/config", which seems to be involved in decoding somehow. However, when I try to open it, it returns a 404.
Coming back to the original issue: those AppFlow records are not decoded. Is there a known solution for this? If not, does anyone know where those element definitions may be found?
Many thanks in advance!
Best
Stan
PS: Seems to be related to https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-decode-netflow-elements-Key-Values-pair/m-p/595345