All Posts

I solved the problem by updating the Lookup Editor app to the latest version (4.0.2) and editing the view as explained by tomapatan.
Hey @Gaikwad, The error message itself shows what the issue is. The dashboards in the Splunk App for AWS Security are powered by the macro "aws-security-cloudtrail-service". The macro definition ships with the app, but by default it does not know which index to look into. Navigate to Settings -> Advanced Search -> Search Macros and locate the macro in the window. Then define which index it needs to search and add the arguments accordingly. In addition, make sure the permissions on the macro are adequate for users to access it outside the app; at minimum, all users should have read access so the dashboard panels load properly. Once the macro is defined with the appropriate index and the permissions are properly provided, the dashboard panels will show the expected results. Thanks, Tejas. --- If the above solution is helpful, an upvote is appreciated.
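As a sketch of what that macro definition might end up looking like (the app directory and the index name here are assumptions — substitute the index your CloudTrail data is actually ingested into):

```ini
# $SPLUNK_HOME/etc/apps/<aws_security_app>/local/macros.conf
# "aws_cloudtrail" is a placeholder index name, not the app's default.
[aws-security-cloudtrail-service]
definition = index=aws_cloudtrail sourcetype=aws:cloudtrail
```

Permissions can then be widened from the same Settings -> Advanced Search -> Search Macros page via the macro's Permissions link.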
Hi @vijreddy30, please check your profile's inbox; I have sent you a message. This looks like a POC/test/dev environment setup, so most probably you do not need "high availability" at all. We may need more details about what you mean by "high availability". Thanks.
Hi @Eyal, to join a lookup you don't need the join command, which should be avoided whenever possible and used only when there is no other solution. You can use the lookup command, which performs a left join, something like this: <your_search> | lookup my_list.csv group_name OUTPUT Severity | where isnotnull(Severity) If the field to join on has a different name in the main search than in the lookup, you can use AS. For more information see https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Lookup Ciao. Giuseppe
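For instance, if the field in the events were named group rather than group_name (a hypothetical field name for illustration), the AS clause would look like this:

```
<your_search>
| lookup my_list.csv group_name AS group OUTPUT Severity
| where isnotnull(Severity)
```

Here group_name is the column in the lookup file and group is the field in the events being matched against it.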
Hey, thanks for the info. It was just the management port that needed changing; I picked a different port, finished the installation, and everything seems to be working as expected. Thanks again for the information. Jamie
Hey @jamie1, It is completely okay to have Splunk services running on ports other than the defaults. You just need to make sure the appropriate configuration files are updated with the port you change. For example, the default ingestion port is 9997; if you wish to change it to, say, 3434, you need to update inputs.conf (on the receiving side) and outputs.conf (on the forwarding side) with the same value in the appropriate stanza. Thanks, Tejas. ---- If the above solution is helpful, an upvote is appreciated.
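As a sketch of those matching stanzas, using the 3434 example above (the group name and hostname are placeholders):

```ini
# inputs.conf on the indexer -- listen for forwarder traffic on 3434
[splunktcp://3434]
disabled = 0

# outputs.conf on the forwarder -- send to the same port
[tcpout]
defaultGroup = primary_group

[tcpout:primary_group]
server = idx1.example.com:3434
```

The key point is simply that the port in the splunktcp stanza and the port in the server setting agree.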
Hi, I have a query that triggers when a user is added to specific types of groups. The query depends on a lookup with 2 columns (one for group_name, another for Severity). I want to find any event of a user being added to one of the monitored groups, but also to enrich my final table with the severity right next to the group_name. I have tried to resolve this using: | join type=left group_name [| inputlookup my_list.csv] | where isnotnull(Severity) But somehow only 2 groups with low severity are being found, even though every group in the list has its own severity. How can I make my table show each group with its severity?
Dears, How can I find out which devices (switch, router, etc.), operating systems (Windows, Linux, macOS, etc.), applications (web application, mobile application, etc.), and services (database server, web server, etc.) Splunk Enterprise Security supports? Also, which operating systems are supported for the search head and indexer: Windows Server or Linux? I could not find this in the documentation or on the internet. Thank you in advance!
Zone-1 has one HF instance and one combined (SH+indexer) instance, and zone-2 is the same. In my project there is no deployer, no indexer cluster, no SH cluster, and no cluster master. How do I implement a high-availability setup?
Hi richgalloway, thank you for the reply. I did try the lookup command as you suggested and it didn't work, but because of your response I went and tried it again, this time using the lower() function, and found it works. Thank you for the help.
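For anyone landing here later, a sketch of that lower() approach (assuming the lookup's group_name values are stored in lowercase):

```
<your_search>
| eval group_name=lower(group_name)
| lookup my_list.csv group_name OUTPUT Severity
| where isnotnull(Severity)
```

Alternatively, a lookup definition (rather than a bare .csv reference) can set case_sensitive_match = false in transforms.conf, which makes the match case-insensitive without the eval.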
Hi there, I am currently trying to set up a universal forwarder on a CentOS 7 server. I have extracted the package and am attempting to start the service, but receive the following: ERROR: mgmt port [8089] - port is already bound. Splunk needs to use this port. The port is in fact in use, so the only solution seems to be using a non-default port. Will this cause any issues in the long run, or is running Splunk on different ports fully supported? I have only ever used the default ports in previous installations. Any help/info would be appreciated. Jamie
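For reference, a sketch of how the management port can be moved before starting the forwarder (9089 is an arbitrary free port, not a recommendation):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
mgmtHostPort = 127.0.0.1:9089
```

If the instance is already running, the same change can be made with splunk set splunkd-port 9089 followed by a restart.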
Hi, Splunk usually takes the event's log time (_time) and parses it into: date_hour, date_mday, date_minute, date_month, date_second, date_wday, date_year. I have found that some of our indexes do not contain these fields, only _time. What may cause this issue? In addition, I found something related to "DATETIME_CONFIG = /etc/datetime.xml" that might be a good starting point, but there is not much on the internet that explains how to resolve this. Would appreciate your help here.
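One likely cause: the date_* fields are only generated when the timestamp is actually parsed out of the raw event text at index time; if DATETIME_CONFIG is set to CURRENT or NONE for a sourcetype (or _time is assigned from somewhere other than the event), they are absent. A props.conf sketch, with a placeholder sourcetype and time format:

```ini
# props.conf on the indexer / heavy forwarder
# "your:sourcetype" and the TIME_* values are placeholders for your data.
[your:sourcetype]
# Leave DATETIME_CONFIG at its default (/etc/datetime.xml) and make sure
# the timestamp is extracted from the event; CURRENT/NONE suppress date_*.
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```

If reindexing is not an option, the same values can be reconstructed at search time, e.g. | eval date_hour=strftime(_time, "%H").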
Yeah, this seems to mitigate the problem. Unfortunately, if you are using Chrome you have to change your global system settings. I hope this will get patched in the next update.
Thanks a lot. I have one more question. I just installed the misp42 app in my Splunk instance and added a MISP instance to Splunk; it works. But I want to compare against my firewall logs from: index=firewall srcip=10.x.x.x. I want to compare dstip with ip-dst from MISP to detect unusual access activity, e.g. when dstip=ip-dst: 152.67.251.30. How can I search for this with misp_instance=IP_Block field=value? I tried this search but it does not work: index=firewall srcip=10.x.x.x | mispsearch misp_instance=IP_Block field=value | search dstip=ip=dst | table _time dstip ip-dst value action It can't get ip-dst from the MISP instance. Can you help me with this, or suggest another way to resolve it? Many thanks and best regards!
Hi guys, I just installed the misp42 app in my Splunk instance and added a MISP instance to Splunk; it works. But I want to compare against my firewall logs from: index=firewall srcip=10.x.x.x. I want to compare dstip with ip-dst from MISP to detect unusual access activity, e.g. when dstip=ip-dst: 152.67.251.30. How can I search for this with misp_instance=IP_Block field=value? I tried this search but it does not work: index=firewall srcip=10.x.x.x | mispsearch misp_instance=IP_Block field=value | search dstip=ip=dst | table _time dstip ip-dst value action It can't get ip-dst from the MISP instance. Can anyone help me with this, or suggest another way to resolve it? Many thanks and best regards!
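One likely issue with a search like the above: search dstip=ip-dst compares dstip against the literal string "ip-dst", not against that field's value; comparing two fields needs where, with the hyphenated field name in single quotes. A sketch, assuming mispsearch (from the misp42 app) enriches matching events with an ip-dst field — the field=dstip argument is an assumption about this setup, not a verified signature:

```
index=firewall srcip=10.x.x.x
| mispsearch misp_instance=IP_Block field=dstip
| where dstip == 'ip-dst'
| table _time dstip ip-dst value action
```

In SPL's where command, single quotes reference a field name, while double quotes would denote a string literal.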
Did you check the values of Report_Id I mentioned earlier @mdsnmss ? Are they repeating or all unique?
Try removing panels one at a time until you find the one that is causing the problem (there may be more than one), then look to fix that one.
Hi Team, we have a requirement to forward archived data to external storage (a GCS bucket). I have checked the Splunk documentation but haven't had any luck. Kindly assist me in forwarding the archived data to the GCS bucket.
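As far as I know, Splunk Enterprise has no built-in GCS frozen-archive target, but indexes.conf can run a script whenever a bucket rolls to frozen, and that script can copy the bucket to GCS. A sketch, assuming the gsutil CLI is installed and authenticated on the indexer (the index name, paths, and bucket name are all placeholders):

```
# --- indexes.conf (per index you want archived) ---
[my_index]
coldToFrozenScript = "/opt/splunk/bin/frozen_to_gcs.sh"

# --- /opt/splunk/bin/frozen_to_gcs.sh ---
#!/bin/sh
# Splunk passes the frozen bucket directory as $1; copy it to GCS
# before Splunk deletes the local copy.
gsutil -m cp -r "$1" "gs://my-splunk-archive/"
```

Note that the script must succeed (exit 0) for Splunk to proceed with deleting the local bucket, so test it carefully on a scratch index first.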
Good morning Ryan, the first doc link you passed me gives a description of the processing time, but I must say it doesn't state whether the value is expressed in seconds or milliseconds. In the second link, the processing time I'm finding concerns the pre-built metric, which was clearly expressed in seconds. My question was the following: I'm in the Analytics section, investigating the sap_idoc_table; I double-click a single row and see a field called PROCESSING_TIME = 0011123. How can I know what unit this is expressed in (from what piece of documentation, or from where in the product)? In AppD I went into the default dashboard provided for the IDoc and double-clicked the table titled "Idoc Errors". There it seems we have the same field, but mapped and explicitly declared. Am I correct to assume that this field is equal to the one called "PROCESSING_TIME" seen in the Analytics engine? If so, I'm wondering how it is that here I can see it mapped and explicitly declared, while in Analytics I can only see the value as a string of text. Best regards
The Splunk App for AWS Security dashboard shows '0' data; I need help fixing this issue. When I try to run/edit the query, it shows the error below.