All Posts

Thanks for your reply, you are right. The decommissioned indexer is now in the "Graceful shutdown" state and its bucket count is 0. It took 2.5 days to decommission 20 TB of data. But SF/RF is still not green: 3 SF tasks are still pending, and resyncing them made no difference. Should I now do a rolling restart after removing the decommissioned indexer, in order to get my SF/RF back? Or simply restart the splunkd daemon on my CM? Another question: is it normal to only see the default data models (and not all of my data models) from the CM (Settings > Data Models)? Many thanks
Here's an untested idea. Install an HF on the server and use Splunk's Ingest Actions feature to write the data to S3. It's not clear whether the HF will be happy writing only to S3 or whether it will also want to send to an indexer. See https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/DataIngest#Heavy_forwarders_managed_through_a_deployment_server for details, including the need for a Deployment Server.
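For illustration only, a minimal sketch of what an Ingest Actions S3 destination might look like on the HF. The destination name and bucket path below are placeholders, and the exact stanza settings (credentials, endpoint, partitioning) should be checked against the Ingest Actions documentation for your version before use:

# outputs.conf on the HF (hypothetical destination name and bucket)
[rfs:my_s3_destination]
path = s3://my-ingest-actions-bucket/exported-data
# credential and endpoint settings go here per the Ingest Actions docs for your version

The ruleset that actually routes events to this destination is then built in the Ingest Actions UI and pushed to the HF (via the Deployment Server, per the linked docs).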
Do not restart the decommissioned indexer. If the indexer stopped running then it has finished its work and the server can be retired.  Consider restarting the CM to force it to rebuild the bucket table.
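If the peer record lingers on the CM after decommissioning, one approach (a sketch only; confirm against the Managing Indexers documentation for your version) is to remove the downed peer from the manager's peer list and then restart the manager so it rebuilds its bucket table. The GUID below is a placeholder:

# On the cluster manager
splunk remove cluster-peers -peers <decommissioned_peer_guid>
splunk restart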
Hi @williamcclark,

The ldapsearch command attrs argument is similar to the Get-ADUser cmdlet Properties parameter; however, unlike Get-ADUser, ldapsearch does not return a default set of LDAP attributes. Using ldapsearch without the attrs argument is equivalent to running Get-ADUser -Properties *. (Technically, the default value for attrs is the Python constant ldap3.ALL_ATTRIBUTES, which evaluates to *.)

To limit the attributes returned, provide a comma-delimited list to the attrs argument:

| ldapsearch attrs="sn,givenName,sAMAccountName"

In the add-on code, "Invalid attributes types in attrs list" is returned when a requested attribute is not present in the directory schema.

How are you using the ldapsearch command? Is it being used by another app or add-on? Does the use case expect a schema extension that isn't installed on your target directory? For example, are you searching for Exchange-related attributes in a directory that does not have the Exchange schema extensions installed?
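For illustration, a slightly fuller hedged example that also supplies a search filter; the domain name, filter, and account name here are placeholders, so adjust them to your environment:

| ldapsearch domain=default search="(&(objectClass=user)(sAMAccountName=jdoe))" attrs="sn,givenName,sAMAccountName"
| table sAMAccountName, givenName, sn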
Hi @bhagyashriyan,

Any Google Cloud solution that allows you to submit HTTP requests, parse HTTP responses, and publish messages to a Google Cloud Pub/Sub topic can use the Splunk Cloud REST API, assuming REST API access is enabled and granted to the source Google Cloud egress address(es). You can execute Splunk searches using the Splunk Cloud REST API search/jobs endpoint.

Here's a simple Bash command-line example using curl, jq, and gcloud:

curl -s -u username:password https://<deployment-name>.splunkcloud.com:8089/services/search/jobs \
  -d search="| makeresults count=10" \
  -d exec_mode=oneshot \
  -d output_mode=json \
| jq -r '.results[] | tojson | @sh' \
| while IFS= read message; do gcloud pubsub topics publish mytopic --message=${message}; done

Replace <deployment-name> with your Splunk Cloud stack name and mytopic with your Google Cloud Pub/Sub topic name. This example assumes gcloud is already correctly configured. You can also use Splunk Cloud access tokens instead of username/password authentication.

See https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud and https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTsearch#search.2Fjobs for more information. I don't work in Google Cloud day to day, so I recommend browsing the Google Cloud documentation for inspiration.
Here's an alternative using rex and eval that should accommodate chars that aren't XML entities:

| rex field=name max_match=0 "(?<name>(?:&#[^;]+;|.))"
| eval name=mvjoin(mvmap(name, if(match(name, "^&#"), printf("%c", if(match(name, "^&#x"), tonumber(replace(name, "[&#x;]", ""), 16), tonumber(replace(name, "[&#;]", ""), 10))), name)), "")

If I were using any of these solutions myself, I'd choose spath. It should handle any valid XML without needing to handle edge cases in SPL.
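For illustration, a minimal spath sketch, assuming the raw event is the XML document and the element of interest sits at record.name (both the input field and the path are hypothetical; adjust them to your actual XML structure). spath should decode standard XML character entities as part of parsing:

| spath input=_raw path=record.name output=name
| table name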
Hi all,

I have to decommission 6 indexers on a 9/9 multisite indexer cluster. The command passed:

splunk offline --enforce-counts

3 days have passed, and I'm still seeing a large number of buckets on the offlined indexer. The bucket count doesn't go down, or only by a very small amount. The indexer is still in "Decommissioning" status on the cluster master (Settings > Indexer clustering), and the RF/SF is KO.

There are no more active tasks (all complete, around 12,000 tasks performed and OK) except for 4 pending tasks that are waiting for the RF/SF to come back to OK. All the indexers on both sites are communicating well with each other.

Has anybody already encountered this problem? I have checked the error messages (splunkd.log) on the CM, the decommissioned indexer, and the other indexers, and I don't find any relevant messages or errors.

Is it safe to launch a rolling restart? Or should I restart splunkd on the decommissioned indexer?

Thanks for any help
@sidtalup27 I have the exact same issue: there is nothing wrong with the port configuration on the VM and everything looks fine with the NSG in Azure, but I'm still facing issues with Splunk Web. Were you able to solve the issue you had?
That's another story. In order to keep the forums tidy, please create a separate thread for a new problem. Describe precisely what's going on and we'll see if we can help you.
Hi @Veerendra, the lookup isn't relevant; it depends on the fields you have. You have to adapt the code I sent to your lookup, not the lookup to the code. Ciao. Giuseppe
Hi @gcusello, could you please send me the lookup file you are using?
Okay, thank you. And what is this _key that you have used?
Hi @AL3Z, this means that the issue with the upload procedure is solved; now you have to debug your code to understand if there's something wrong or missing, e.g. an image or a JS file. If you need help, you should share the dashboard code (only if it's a Classic dashboard). Ciao. Giuseppe
@gcusello, yes, the issue is when the dashboard runs. Do you want me to paste the dashboard source code here?
Hi @Veerendra, no, the solution to have everything in one panel is to use JS; this is a workaround that I created to avoid using JS. Ciao. Giuseppe
@gcusello Thanks for the sample code. I would like to ask: can we do the same in a single panel, where we click the record and it is updated in the same panel?
Hi @AL3Z, where is the issue: in the upload procedure or when the dashboard runs? If it's in the upload procedure, the message should say which object has the issue. If it's when the dashboard runs, there's something wrong or missing in the dashboard. Ciao. Giuseppe
@gcusello @PickleRick, I have changed the permissions successfully, but after installing it is throwing a new error:

Something went wrong!
Failed to load current state for selected entity in form!
Details
Error: Request failed with status code 500 ERR0005

How do we fix this issue, any idea? Thanks in advance.
Index names don't matter here. It's about the data in the indexes. Anyway, perfmon data does not include the OS version as far as I remember, so you need to make sure you have this ingested another way.

What data you have in your linux index is beyond me - you should have it documented somewhere. I suppose you have TA_nix deployed across your environment and some inputs enabled, but we don't know which ones and what data you're ingesting. So the question is what data you _have_. If you know this, you'll probably know what to search for yourself.
As @gcusello already pointed out, when working with _time it's usually (there are some use cases against it, but they are rare) good to leave it as a Unix timestamp throughout your whole search pipeline and only render it to human-readable text at the end for presentation. (You can also use fieldformat to keep the data in machine-convenient form but present the time to the user as a formatted string - that's my preferred approach.)

The question is what kind of data you actually have and how your firewall reports traffic on an ongoing connection. Some firewalls (for example Juniper) give you an event on flow creation and on flow closing, with just one value on session close giving the summarized traffic across the whole flow. Other firewalls can give you "keep-alive" events on already established sessions, providing differential traffic updates (but some can also give aggregated traffic over the whole session). So it's not that obvious how to query for that data.

Also, if you have your data normalized into the CIM datamodel and your datamodel accelerated, you could use that datamodel to make your searches way, way faster.
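For illustration, a minimal sketch of the fieldformat approach; the index, sourcetype, and field names below are placeholders for whatever your firewall data actually uses:

index=firewall sourcetype=fw:traffic
| stats sum(bytes) as total_bytes, min(_time) as first_seen, max(_time) as last_seen by src_ip dest_ip
| fieldformat first_seen = strftime(first_seen, "%Y-%m-%d %H:%M:%S")
| fieldformat last_seen = strftime(last_seen, "%Y-%m-%d %H:%M:%S")

This keeps first_seen and last_seen numeric for any further sorting or math, while displaying them as readable timestamps in the results.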