Activity Feed
- Posted Re: Enterprise Security in hybrid (cloud + on premise) scenario on Splunk Enterprise Security. 01-12-2021 12:11 AM
- Got Karma for Re: TA-webtools - CURL command error - schema not specified. 07-27-2020 08:12 AM
- Posted Re: TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-27-2020 05:58 AM
- Karma Re: TA-webtools - CURL command error - schema not specified for jkat54. 07-27-2020 05:58 AM
- Posted Re: TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-27-2020 05:44 AM
- Posted Re: TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-23-2020 09:08 PM
- Posted Re: TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-23-2020 04:38 AM
- Posted Re: TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-23-2020 03:32 AM
- Posted Re: TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-22-2020 11:29 PM
- Posted TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-22-2020 10:26 PM
- Tagged TA-webtools - CURL command error - schema not specified on All Apps and Add-ons. 07-22-2020 10:26 PM
- Karma Re: After Updating the Add-on for Windows receive error "Could not load lookup=LOOKUP-app4_for_windows_security" for napomokoetle. 06-05-2020 12:50 AM
- Karma Re: Why am I receiving the following Collections.conf and Drilldown_settings warning messages after installing the Alert Manager App? for napomokoetle. 06-05-2020 12:50 AM
- Got Karma for Tableau System Logs. 06-05-2020 12:50 AM
- Got Karma for Linux Universal Forwarder - Security Recommendations. 06-05-2020 12:50 AM
- Karma Re: Why is the Windows Event Collector not parsing correctly? for chje. 06-05-2020 12:49 AM
- Karma Re: Why is the Windows Event Collector not parsing correctly? for FrankVl. 06-05-2020 12:49 AM
- Karma Re: What is the difference between Splunk and ELK Stack Elasticsearch in terms of Security, Infrastructure, deployment etc? for outcoldman. 06-05-2020 12:49 AM
- Got Karma for Any Updates to this App planned?. 06-05-2020 12:49 AM
- Got Karma for Any Updates to this App planned?. 06-05-2020 12:49 AM
01-12-2021
12:11 AM
Stumbled upon this question - I think it is answered here: https://docs.splunk.com/Documentation/SplunkCloud/8.1.2011/User/SearchCloudfromEnterprise
07-27-2020
05:58 AM
1 Karma
OK. Maybe it would be helpful to upload the source to GitHub to get more contributions. I would try to extend the documentation.
07-27-2020
05:44 AM
I got one example running by removing the ticks and quotes from the uri= field. Still trying to get the datafield working - is the datafield JSON or params or both?
07-23-2020
09:08 PM
No - I am on a Windows machine (using Chrome) and the Splunk server is on Linux. Really strange - I get an HTML response saying that something is odd with my authentication. I rechecked the API key with curl on the CLI.
07-23-2020
04:38 AM
I previously tested the params with the data field, with no success. So I tried the request with Linux curl in PuTTY to make sure the request works at all - the result is that it works on the CLI but not with the app. I don't know how to extend the testing / debugging. Is there any additional logging?
07-23-2020
03:32 AM
Hello, OK, but in both examples I try to curl the URI directly (without a payload), which is not working as expected.
07-22-2020
11:29 PM
Another example with VirusTotal.
Works on the CLI:
curl -X GET 'http://www.virustotal.com/vtapi/v2/domain/report?domain=tines.io&apikey=122555'
Does NOT work with webtools:
| curl method=get uri="http://www.virustotal.com/vtapi/v2/domain/report?domain=tines.io&apikey=123456" debug=true
Maybe I need some more examples in the documentation 🙂
07-22-2020
10:26 PM
Hello community and @jkat54,
I am currently testing your fancy webtools app. It looks very promising, but I am running into an error I don't understand.
Example (note: the CSV simply gets me the ID - I could also do eval team_id="12"):
index=test source="NHL-Teams.csv" Team=*Colorado* | eval team_id=ID | eval url_string="https://statsapi.web.nhl.com/api/v1/teams/".team_id | curl uri=url_string method=get debug=true | table curl*
gets me a "curl uri schema not specified" error.
| curl uri="https://statsapi.web.nhl.com/api/v1/teams/12" method=get debug=true | table curl*
is working as intended.
I can only assume that this kind of string concatenation for building a URL is not supported, but I don't understand why 🙂 Or do you suggest doing it in a different way?
Kind regards!
- Labels:
- troubleshooting
03-24-2020
05:34 AM
https://docs.splunk.com/Documentation/WindowsAddOn/7.0.0/User/Lookups
03-17-2020
08:23 AM
Great tip! Can you also share what your inputs.conf looks like?
12-11-2019
03:04 AM
This is now also an issue for me after upgrading to version 3.0.0.
09-05-2019
07:40 AM
1 Karma
Hello Splunk-Community,
For months we have been discussing with our Linux admins whether it is OK to install the Splunk Universal Forwarder on Linux (Red Hat) or not.
We just want to collect Tomcat / Apache logs from various Linux hosts, and really don't know how.
The main concern is the management of the required permissions (per host / application, for about 1000 Linux systems) to give the forwarder access to the application log directories. We don't want to run the forwarder as root.
So what are you doing? Do you have any best practices?
I can't believe we are the only ones facing this discussion.
Thank you
PS: As a side note, on Windows it seems to be OK to run the forwarder as a system service...
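What we are experimenting with right now is POSIX ACLs instead of group changes - just a sketch, assuming a dedicated non-root splunk user; the log path below is made up for illustration:
# grant the splunk user read access to the log directory and its files (capital X = execute on directories only)
setfacl -R -m u:splunk:rX /opt/tomcat/logs
# default ACL so newly rotated log files inherit the permission
setfacl -R -d -m u:splunk:rX /opt/tomcat/logs
This would still have to be rolled out per host / application, which is exactly the management overhead we are worried about.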
04-10-2019
02:51 AM
Thank you for your quick response.
Yes, I think we have to separate it, but from a dev perspective it is so much easier to have access to the data.
A "read-only" search head mode would be perfect.
04-10-2019
02:33 AM
Hello Community,
We are in the process of setting up a local development environment (with pushing to a Git server and so on).
One question arises:
If I set up my local Splunk installation as a search head, I can utilize the production data for development.
But how do I prevent running (and executing) critical commands like collect, which would write data from my "DEV-SearchHead" to the production indexes?
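What we are considering as a partial mitigation on the DEV search head is a stripped-down role - just a sketch; the role name is ours, and we have not yet verified that this blocks every write path such as collect:
# authorize.conf on the DEV search head (sketch)
[role_dev_readonly]
# deliberately no importRoles, so capabilities such as run_collect / run_mcollect are not inherited
search = enabled
srchIndexesAllowed = *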
Any hint is welcome.
Kind regards
- Tags:
- splunk-enterprise
02-20-2019
05:39 AM
Hello Team,
I am trying to set up the WildFire API report download.
The prerequisites are met: the API key is set up, and we receive WildFire logs through syslog.
While debugging I noticed that the following saved search is triggered:
search = pan_wildfire verdict="malicious" | panwildfirereport | table wildfire_report | rename wildfire_report AS _raw | collect index=main sourcetype=pan:wildfire_report
https://github.com/PaloAltoNetworks/SplunkforPaloAltoNetworks/blob/639568f065ce026e2554d4b9be04a85b2034f4a8/default/savedsearches.conf
I see two issues: the pan_wildfire alias seems not to work without an index, and the search stores its result in the main index, which should stay empty.
I am wondering if anybody got this working?
Python.log shows no entries.
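As a workaround we are thinking about overriding the saved search locally with explicit indexes - just a sketch; the index name is from our environment, and the stanza name is a guess that would have to match the app's savedsearches.conf:
# local/savedsearches.conf (sketch)
[WildFire Report Downloader]
search = index=pan_logs pan_wildfire verdict="malicious" | panwildfirereport | table wildfire_report | rename wildfire_report AS _raw | collect index=pan_logs sourcetype=pan:wildfire_report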
Kind regards
02-20-2019
05:33 AM
Hello,
you have to order an additional service:
https://docs.paloaltonetworks.com/cloud-services/apps/log-forwarding/log-forwarding-app-getting-started/get-started-with-log-fowarding-app
to forward cloud logs to Splunk.
Kind regards
02-08-2019
04:17 AM
1 Karma
Hello everybody,
I am wondering if anybody is already doing some Tableau system monitoring with the logs Tableau provides?
I was a little bit surprised not to find an app or some inputs.conf recommendations.
As far as I can see, it should be Tomcat and Apache logs - plus some Redis.
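What we have in mind as a starting point is a plain file monitor - just a sketch; the paths and sourcetypes below are guesses for a default Tableau Server installation on Linux and still need to be verified:
# inputs.conf sketch (paths and sourcetypes are guesses)
[monitor:///var/opt/tableau/tableau_server/data/tabsvc/logs/httpd]
sourcetype = tableau:apache
disabled = 0
[monitor:///var/opt/tableau/tableau_server/data/tabsvc/logs/vizqlserver]
sourcetype = tableau:vizqlserver
disabled = 0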
Any link / tip helps.
Thank you!
09-11-2018
01:11 AM
Bump, I have the same question.
We are trying to chart bytes over duration for a given start time.
So the bytes have to be distributed uniformly in a timechart from the start time to start time + duration (as the end time).
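What we are experimenting with - just a sketch; it assumes each event has its start time in _time plus duration (seconds) and bytes fields, and it spreads the bytes evenly over 5-minute slices:
index=test sourcetype=session_logs
| eval slices=mvrange(0, duration, 300), slice_count=mvcount(slices)
| mvexpand slices
| eval _time=_time + slices, bytes_slice=bytes / slice_count
| timechart span=5m sum(bytes_slice) AS bytes
The index and sourcetype are placeholders; the real question is whether there is a more elegant way than mvexpand.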
02-27-2018
11:15 PM
2 Karma
Hello,
We are just starting the integration of SEP via syslog, and noticed that this TA does not seem to work with all (new) sourcetypes / fields.
There is another "official" version of the app, which requires file-based forwarding to Splunk (we prefer syslog!).
Is anybody successfully using this TA with the latest SEP version?
Kind regards
02-21-2018
05:06 AM
Newer SEP versions allow sending data via syslog.
Only the slightly outdated Splunk app is not so nice.
01-17-2018
03:45 AM
Hello Splunkers,
We want to professionalize our app development / deployment in our Splunk environment.
We have a DEV and a TEST system and a PROD cluster (only PROD is a cluster!).
What I am thinking about is to put every app on DEV into a dedicated Git repo to keep track of changes.
Check it out on TEST for testing, and then deploy to PROD (search heads, indexers, ...).
Any hints / tips on how to do a proper app development and deployment lifecycle (and which tools are involved)?
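The rough flow we have in mind looks like this - just a sketch; the app name, Git remote, and cluster master hostname are made up:
# on DEV: track the app in Git and tag a release candidate
cd $SPLUNK_HOME/etc/apps/my_custom_app
git init && git remote add origin git@git.example.com:splunk/my_custom_app.git
git add -A && git commit -m "initial version" && git tag v1.0.0 && git push -u origin HEAD --tags
# on TEST: check out the tagged version for testing
git clone --branch v1.0.0 git@git.example.com:splunk/my_custom_app.git $SPLUNK_HOME/etc/apps/my_custom_app
# on PROD: copy the app to the cluster master and push it to the indexer peers
scp -r my_custom_app splunk@cluster-master:/opt/splunk/etc/master-apps/
ssh splunk@cluster-master "/opt/splunk/bin/splunk apply cluster-bundle"
We are still unsure how the PROD search heads should be handled in this flow.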
Thank you!!
02-16-2017
05:06 AM
Caution:
You have to completely remove the Rapid7 app (also from the file system). The add-on will not work correctly if you have both the app and the add-on installed!
02-16-2017
05:00 AM
What I suggest to track down the error:
- Recheck that the credentials (user / password) are working and have the correct permissions.
- Reinstall the add-on and make sure no other Rapid7 add-on / app is installed (we noticed problems with old installations, despite the fact that the app was deactivated).
- Contact the Rapid7 Splunk support: support@rapid7.com
Kind regards
02-13-2017
05:24 AM
I will forward your feedback to the Rapid7 support guys.
Maybe some kind of workaround is possible, e.g. exporting the results, using a universal forwarder to send the data to the indexer cluster, and disabling the cron job.
02-12-2017
11:21 PM
Hello,
How do I set up this add-on in a cluster environment?
As far as I understand the add-on, a cron job is triggered which pulls the data from Nexpose.
So if I install the add-on on my indexer cluster, every indexer would start the cron job and fetch the data?!
We don't use any heavy forwarders at the moment on which we could run the add-on.
So, any advice on how to set up the add-on on the indexer cluster?
Kind regards!