All Posts


drwx------- Splunk Splunk TA_Akamai_SIEM ... This is what is there for this app in DS and HF
For example with ls -laR /opt/splunk/etc/deployment-apps/whatever_TA  
Have you tried the table command? index="acoe_bot_events" unique_id = * | lookup "LU_ACOE_RDA_Tracker" ID AS unique_id | search Business_Area_Level_2="Client Solutions Insurance" Category="*" Business_Unit = "*" Analyst_Responsible = "*" Process_Name = "*" | eval STP=(passed/heartbeat)*100 | stats sum(heartbeat) as Volumes sum(passed) as Successful avg(STP) as Average_STP by Process_Name, Business_Unit, Analyst_Responsible | eval Average_STP=round('Average_STP',2) | table Process_Name, Analyst_Responsible, Business_Unit, Volumes, Successful, Average_STP  
Hi @kriznikm , let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @dtapia , good for you, see next time! let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
I have a query which is a lookup, and I have assigned the output to "Report" as I want to send the entirety of the report via Teams, but I'm struggling to send it as a table; it's just the entire output and it's not readable. Here's the output in Teams, and this is my query:  index="acoe_bot_events" unique_id = * |lookup "LU_ACOE_RDA_Tracker" ID AS unique_id |search Business_Area_Level_2="Client Solutions Insurance" , Category="*", Business_Unit = "*", Analyst_Responsible = "*", Process_Name = "*" |eval STP=(passed/heartbeat)*100 |eval Hours=(passed*Standard_Working_Time)/60 |eval FTE=(Hours/127.5) |eval Benefit=(passed*Standard_Working_Time*Benefit_Per_Minute) |stats sum(heartbeat) as Volumes sum(passed) as Successful avg(STP) as Average_STP,sum(FTE) as FTE_Saved, sum(Hours) as Hours_Saved, sum(Benefit) as Rand_Benefit by Process_Name, Business_Unit, Analyst_Responsible |foreach * [eval FTE_Saved=round('FTE_Saved',3)] |foreach * [eval Hours_Saved=round('Hours_Saved',3)] |foreach * [eval Rand_Benefit=round('Rand_Benefit',2)] |foreach * [eval Average_STP=round('Average_STP',2)] | eval row = Process_Name . "|" . Analyst_Responsible . "|" . Business_Unit . "|" . Volumes . "|" . Successful . "|" . Average_STP | stats values(row) AS report | eval report = mvjoin(report, " ")
So how do I check ownership? I have admin rights in the Splunk UI and root access on the AWS Linux Splunk instance...
Won't hurt. But I would first try checking ownership, not permissions.
It depends on your architecture. See the Masa diagrams - https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 The index-time settings (line breaking, timestamp extraction, indexed field extraction and the like) are applied on the first "heavy" component (one based on a full Splunk Enterprise installation, not a UF) in the event's path. So if your ingestion path is UF->idx, you need the TAs on the indexers. If you have a TA with modular inputs on a HF, that same HF will do the parsing, so for data coming from that HF you will need the index-time settings in a TA there and the search-time settings on the SH. If you have a fairly complicated (and very unusual, though I can think of a scenario where it could be used) path like UF1->HF1->UF2->HF2->idx, you need the index-time settings on HF1 since it's the first heavy component. It does the parsing and sends the data on as parsed, so subsequent components don't need to parse it.
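To make that concrete, here is a minimal sketch of what index-time vs. search-time settings typically look like in a TA's props.conf (the sourcetype name and values are made up for illustration, not taken from any real TA):

```ini
[my:custom:sourcetype]
# Index-time settings: applied on the first heavy component in the path
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z

# Search-time settings: applied on the search head
EXTRACT-status = status=(?<status>\d+)
```

The index-time stanza lines above must be present wherever the parsing actually happens (indexer or HF); the EXTRACT only matters on the SH.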
Can I try giving chmod 755 to that app? Will that work? Or can I remove the app and install it and push it again?
While the solution with a scripted input is a nice trick from the technical point of view, as a seasoned admin I'd advise against using it, especially on environments you have limited/difficult connectivity with. Any splunkd-spawned solutions which change the general splunkd configuration are prone to leaving your installation in a broken state should anything go wrong. Of course whether it's important depends on how critical the systems are and whether you can tolerate potential downtime vs. what you can save by doing the "automation". The risk is yours. You have been warned.
Since the app is being pulled from the DS by the same process which will be using it (or spawning additional processes under the same user), the permissions on the HF should be good. On the DS, of course, the splunkd process must be able to access the whole directory to make an archive of its contents. 0700 should be OK as long as all files and directories are owned by the user the splunkd process is running as.
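A quick sketch of the ownership check (the TA path and the "splunk" user are assumptions; the demo uses a throwaway directory so it runs anywhere):

```shell
# Demo on a temp dir; on a real host, point APP_DIR at the actual app, e.g.
# /opt/splunk/etc/deployment-apps/TA_Akamai_SIEM (path is an assumption).
APP_DIR=$(mktemp -d)
chmod 700 "$APP_DIR"            # same mode as the drwx------ you are seeing

# %U = owner, %G = group, %a = octal mode
stat -c '%U:%G %a' "$APP_DIR"

# Which user is splunkd actually running as? (run on the real host)
#   ps -o user= -C splunkd | head -n1

# If the owner does not match that user, fix it recursively (as root):
#   chown -R splunk:splunk /opt/splunk/etc/deployment-apps/TA_Akamai_SIEM

rmdir "$APP_DIR"
```

If `stat` reports the same user that `ps` shows for splunkd, 0700 is sufficient and chmod 755 is unnecessary.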
Did you search the errors from your SH as per my last reply? Can you also check for  index="_internal" sourcetype="splunkd" *ta-akamai_siem* CASE(ERROR)   Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @Karthikeya  Since you are able to make this work when installing directly on the HF, we should be able to rule out the JRE and proxy configuration. You've confirmed that you have a license on the HF, so there shouldn't be any features disabled which might cause the issue. I guess the next question is "what is different" - if it works when installing directly vs deploying from the DS, then we need to work out what differs. How did you set it up on the HF? Did you copy all the files as they were on the HF and then drop them into deployment-apps on the DS? Out of interest, were the client_id and client_secret encrypted in the inputs.conf on the HF? If so, then as long as the encryption was done on the same HF, I wouldn't have expected an issue copying the encrypted value. I think the key now is to work out what is different: I would get it working again on the HF, copy the files off, deploy from the DS, and then compare the files from the working installation against the DS installation. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
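One way to do that comparison (directory names below are illustrative; the demo fabricates two copies so it can run anywhere):

```shell
# Pretend copies of the app: one taken from the working HF install,
# one as deployed by the DS (contents are made up for the demo).
mkdir -p /tmp/hf_copy/TA_demo /tmp/ds_copy/TA_demo
printf 'interval = 60\n'  > /tmp/hf_copy/TA_demo/inputs.conf
printf 'interval = 300\n' > /tmp/ds_copy/TA_demo/inputs.conf

# -r recurses, -q prints only which files differ
# (diff exits non-zero when differences are found, hence the || true)
diff -rq /tmp/hf_copy/TA_demo /tmp/ds_copy/TA_demo || true

rm -rf /tmp/hf_copy /tmp/ds_copy
```

Any file that `diff -rq` names (or reports as "Only in ...") is a candidate for why the DS-deployed copy behaves differently.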
Studio and Classic has their pros and cons. Personally, I prefer Classic as there are more opportunities to extend the current capabilities through CSS and Javascript and custom visualisations, etc. ... See more...
Studio and Classic have their pros and cons. Personally, I prefer Classic as there are more opportunities to extend the current capabilities through CSS and Javascript and custom visualisations, etc. While it is true that Splunk development is focussed on Studio, you are tied to their release schedule, so, if you have the patience to wait until the features you want make it into a release, stick with Studio; otherwise, Classic might give you the features already, with the compromise of losing some of the wysiwyg features of Studio.
I checked by going to my AWS Linux instance (where our Splunk instances reside); for this particular add-on folder we have drwx------- permissions on both the DS and the HF. Do I need to change these permissions to configure the data input on the HF, or are these permissions sufficient? @PickleRick 
Whitelisting is one thing but I'd verify with your proxy admins that the requests are properly passed through. Just to be on the safe side.
The indexers extract fields from events as they are read from the index. As @PickleRick implied, the effort put into that extraction is determined by the search mode (Fast, Smart, or Verbose). Each extracted field takes up memory for processing and network bandwidth to send to the SH. Using the fields command helps reduce the number of fields retained, so you save memory and bandwidth. Indexers do not decide whether a field is interesting or not - the SH does that.
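As an illustration (the field names here are made up, not from the thread's data), putting fields early in the pipeline keeps the indexers from shipping every extracted field to the SH:

```
index="acoe_bot_events" unique_id=*
| fields unique_id, Process_Name, Business_Unit
| stats count by Process_Name
```

Only the three listed fields (plus internal ones) travel over the wire, instead of everything the search mode would otherwise extract.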
Yes, we're using a proxy for that in our company and have whitelisted these domains as well in our AWS VPC..
It's hard to say precisely, since the add-on is not very talkative in terms of logs, but my understanding is that Splunk is trying to validate the config - see https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/ModInputsValidate for how it works. The 404 error comes from the add-on itself. Unfortunately, it's not very descriptive. And it's confusing, since 404 means the resource wasn't found; access permission problems should be signalled with 403. You could try to check whether the add-on has some configurable logging (typically you'd look for a log4j.properties file in the case of Java-based software). Are you using a proxy to reach the internet?