Learn BTech Information Science Engineering with Free Lessons & Tips

Lesson Posted on 24/02/2018 Learn BTech Information Science Engineering +6 VB.NET Computer Science and Applications Computer Science & Information Technology; Engineering Diploma Tuition BTech Computer Science Engineering BSc Computer Science

Infix Expression To Post-fix Expression Conversion Procedure

SR-IT Academy


Algorithm

1. Scan the infix expression from left to right.

2. If the scanned character is an operand, output it.

3. Else,

a. If the precedence of the scanned operator is greater than the precedence of the operator on top of the stack (or the stack is empty), push it.

b. Else, pop operators from the stack and output them while the precedence of the scanned operator is less than or equal to the precedence of the operator on top of the stack; then push the scanned operator onto the stack.

4. If the scanned character is an ‘(‘, push it to the stack.

5. If the scanned character is a ‘)’, pop and output from the stack until a ‘(‘ is encountered; discard the ‘(‘ without outputting it.

6. Repeat steps 2-5 until the entire infix expression has been scanned.

7. Pop and output from the stack until it is empty.
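
A minimal C sketch of this procedure (the helper names and token set are illustrative; it assumes single-character operands and the operators +, -, *, /):

#include <stdio.h>
#include <ctype.h>

/* Illustrative helper: precedence of the supported operators. */
static int prec(char op) {
    switch (op) {
        case '+': case '-': return 1;
        case '*': case '/': return 2;
        default:            return 0;   /* '(' and anything else */
    }
}

/* Convert an infix expression with single-character operands to postfix. */
void infix_to_postfix(const char *infix, char *postfix) {
    char stack[100];
    int top = -1, k = 0;

    for (int i = 0; infix[i] != '\0'; i++) {
        char c = infix[i];
        if (isalnum((unsigned char)c)) {            /* step 2: operand, output it */
            postfix[k++] = c;
        } else if (c == '(') {                      /* step 4: push '(' */
            stack[++top] = c;
        } else if (c == ')') {                      /* step 5: pop until '(' */
            while (top >= 0 && stack[top] != '(')
                postfix[k++] = stack[top--];
            if (top >= 0) top--;                    /* discard the '(' itself */
        } else {                                    /* step 3: operator */
            while (top >= 0 && prec(stack[top]) >= prec(c))
                postfix[k++] = stack[top--];        /* pop higher/equal precedence */
            stack[++top] = c;                       /* then push the scanned operator */
        }
    }
    while (top >= 0)                                /* step 7: flush the stack */
        postfix[k++] = stack[top--];
    postfix[k] = '\0';
}

int main(void) {
    char out[100];
    infix_to_postfix("a+b*(c-d)", out);
    printf("%s\n", out);   /* prints abcd-*+ */
    return 0;
}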


Lesson Posted on 03/01/2018 Learn BTech Information Science Engineering

Radiation: Full Concept

Debraj Paul



A heat transfer mechanism in which no medium is required is called radiation. It refers to the movement of heat in waves, as it does not need molecules to travel through. Objects need not be in direct contact with one another to transmit heat: whenever you feel heat without actually touching the object, it is because of radiation. Moreover, colour, surface orientation, etc. are some of the surface properties on which radiation depends greatly.

In this process, the energy is transmitted through electromagnetic waves called radiant energy. Hot objects generally emit thermal energy to cooler surroundings. Radiant energy is capable of travelling through a vacuum from its source to the cooler surroundings. The best example of radiation is the solar energy that we get from the Sun, even though it is millions of miles away from us.
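
The lesson does not quote the governing relation, but the standard Stefan–Boltzmann law quantifies this dependence on temperature and surface properties:

P = ε σ A T^4

where P is the radiated power, ε is the emissivity (a surface property between 0 and 1, determined by colour and finish), σ ≈ 5.67 × 10^-8 W m^-2 K^-4 is the Stefan–Boltzmann constant, A is the surface area, and T is the absolute temperature of the surface.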


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering +3 BTech Computer Science Engineering BCA Tuition BSc Computer Science

What Are Register Variables?

Shiladitya Munshi



Registers are faster to access than memory. Hence, when we declare a variable with the register keyword, the compiler is told that the variable may be kept in a CPU register. Whether the variable is actually placed in a register depends on the compiler and on the number and size of the registers of the underlying hardware.

In general, variables that are used with high frequency (like loop variables) are the perfect candidates for declaring as register, because the gain in speed can be considerable in those cases.

It is not allowed to take the address of a register variable: applying the & operator to it is illegal, because the variable may not reside in addressable memory at all.

It is perfectly all right to declare a pointer as register, because a register can store the address of another variable, provided the size of the register permits it.
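
A minimal C sketch (illustrative) of the points above: the register hint on a loop variable, and the restriction on taking its address.

#include <stdio.h>

int main(void) {
    register int i;                /* hint: keep this high-frequency loop variable in a register */
    long sum = 0;

    for (i = 0; i < 1000; i++)
        sum += i;

    /* int *p = &i; */             /* error: taking the address of a register variable is illegal */

    printf("sum = %ld\n", sum);
    return 0;
}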



Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering +3 BTech Computer Science Engineering BCA Tuition BSc Computer Science

What Is The Difference Between Scope And Lifetime?

Shiladitya Munshi



The scope of a variable is defined as the block of code from which we can refer to or access it. On the other hand, the lifetime of a variable is defined as the time between allocating memory for it and relinquishing that memory. Let us take an example:

void func1(void) {
    int x = 5;
    // do other stuff
}

void func2(void) {
    int y = 10;
    func1();
    // do other stuff
}

In the above example the scope of x is func1 and the scope of y is func2. Now when we call func1 from func2, then inside func1 the scope of y has ended, but the lifetime of y still persists, because the memory for y has not yet been relinquished.
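
A related sketch (illustrative, not from the lesson): a static local variable, whose scope is only the function but whose lifetime is the whole program run, makes the scope/lifetime distinction very visible.

#include <stdio.h>

int counter(void) {
    static int calls = 0;   /* scope: inside counter() only; lifetime: the entire program run */
    return ++calls;
}

int main(void) {
    counter();
    counter();
    printf("third call returns %d\n", counter());   /* prints 3: calls kept its value between calls */
    return 0;
}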


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering +4 Design & Analysis of Algorithms BTech Computer Science Engineering BCA Tuition BSc Computer Science

Understanding Big Omega (Ω), Big Theta (Θ), Small Oh (o) And Small Omega (ω) Notations

Shiladitya Munshi


How to describe Big Omega(Ω) ?
 
If the run time of an algorithm is Ω(g(n)), it means that the running time of the algorithm (as n gets larger) is at least proportional to g(n). Hence it helps in estimating a lower bound on the number of basic operations to be performed.
 
More specifically, f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x)
 
Mathematically, a function f(x) is equal to Big Omega of another function g(x), i.e. f(x) = Ω(g(x)), if and only if there exist two constants C1 and C2 such that

a) C1 and C2 are always positive
b) 0 <= C1*g(n) <= f(n) for all n >= C2
 
 
How to describe Big Theta (Θ)?
 
If the run time of an algorithm is Θ(g(n)), it means that the running time of the algorithm (as n gets larger) grows at the same rate as g(n). Hence it helps in estimating a tight bound on the number of basic operations to be performed.
 
Hence f(x) = Θ(g(x)) (big - theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x)
 
Mathematically, a function f(x) is equal to Big Theta of another function g(x), i.e. f(x) = Θ(g(x)), if and only if there exist three constants C1, C2 and C3 such that

a) C1, C2 and C3 are always positive
b) 0 <= C1*g(n) <= f(n) <= C2*g(n) for all n >= C3
 
What are Small Oh and Small Omega?
 
f(x) = o(g(x)) (small-oh) means that the growth rate of f(x) is asymptotically strictly less than the growth rate of g(x).
Mathematically, a function f(x) is equal to Small Oh of another function g(x), i.e. f(x) = o(g(x)), if and only if for every positive constant C1 there exists a positive constant C2 such that

0 <= f(n) < C1*g(n) for all n >= C2
 
So this gives a loose upper bound for complexities of f(x).
 
On the other hand, f(x) = ω(g(x)) (small-omega) means that the growth rate of f(x) is asymptotically strictly greater than the growth rate of g(x).

Mathematically, a function f(x) is equal to Small Omega of another function g(x), i.e. f(x) = ω(g(x)), if and only if for every positive constant C1 there exists a positive constant C2 such that

0 <= C1*g(n) < f(n) for all n >= C2
 
So this gives a loose lower bound for complexities of f(x).
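
A short worked example (not part of the original lesson): take f(n) = 3n^2 + 5n and g(n) = n^2. Since 0 <= 3n^2 <= 3n^2 + 5n <= 4n^2 for all n >= 5 (because 5n <= n^2 once n >= 5), the constants C1 = 3, C2 = 4 and C3 = 5 satisfy the Big Theta definition, so f(n) = Θ(n^2); the left half of the inequality alone gives f(n) = Ω(n^2). On the other hand, 5n = o(n^2), because for any positive constant C1 we have 5n < C1*n^2 whenever n > 5/C1; by the same argument, n^2 = ω(5n).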

Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering +4 Design & Analysis of Algorithms BTech Computer Science Engineering BSc Computer Science BCA Tuition

A Tutorial On Dynamic Programming

Shiladitya Munshi



What is Dynamic Programming?

Dynamic Programming, DP in short, is an intelligent way of solving a special type of complex problem which otherwise would hardly be solvable in a realistic time frame.

What is that special type of problem? 

DP is not the best fit for all types of complex problems; it is well suited only for problems with the following characteristics:

  1. The given problem must be decomposable into multiple smaller sub-problems of a similar nature.
  2. The sub-problems must also be decomposable into still smaller sub-problems, and this nesting should continue until a stage is reached at which the current sub-problems can be solved at least cost.
  3. At any particular stage, the sub-problems must be inter-dependent (they overlap).
  4. At any specific stage, the problem can be solved by solving its sub-problems.

So this is like Divide and Conquer. Isn't it?

Wait. Don't jump to any conclusion right now. We have not studied the DP procedure yet! We are currently studying only the nature of DP problems, so it would be too early to compare Divide and Conquer with DP.

But it is true that a mere look at the characteristics of DP problems gives a feeling that DP is close to Divide and Conquer. There is, however, a MAJOR difference: in Divide and Conquer, at any stage, the sub-problems have No Inter-Dependencies.

Let us put this on hold. We will come back to this topic once we have gained some experience with DP. But for the time being, take it from me that DP and Divide & Conquer are not the same.

Why don't you just say how DP works? 

DP works as simply as possible. After the problem is broken down into trivial sub-problems in levels, DP starts solving the sub-problems bottom up. Once a sub-problem is solved, its solution is written down (saved) in a table so that if the same sub-problem reappears, we get a ready-made solution. This is done at every level.

What is so special about this process?

Yes, it is special. While you decompose a problem into sub-problems, sub-problems into sub-sub-problems and so on, you ultimately reach a point where you need to solve the same sub-problems many times. And this "many" is no child's play in real life: a lot of computational resources are unnecessarily spent on it, which makes the process sluggish with poor time complexity.

With DP you can avoid the re-computation of the same sub-problems. This saves a lot of time and other computational resources. There are, of course, table look-ups for the reappearing sub-problems, and these are not a bed of roses either: they take some time, but with hashing and other smart alternatives, table look-ups can be made cheap.

This is going over my head! Show me how DP works with an example. 

Let's think of the Fibonacci series. We know that Fib(n) = Fib(n-1) + Fib(n-2). Hence, to compute Fib(4),

Fib(4) = Fib(3) + Fib(2)

          = (Fib(2) + Fib(1)) + Fib(2)

          = ((Fib(1) + Fib(0)) + Fib(1)) + Fib(2)

          = ((Fib(1) + Fib(0)) + Fib(1)) + (Fib(1) + Fib(0))

This could easily be done with Divide and Conquer using recursion, but there will be several calls for the trivial cases Fib(1) and Fib(0).

But in DP style we can think of it like this:

    Fib(4)             ------------------------------ level 0
    Fib(3) + Fib(2)    ------------------------------ level 1
    Fib(2) + Fib(1)    ------------------------------ level 2
    Fib(1) + Fib(0)    ------------------------------ level 3

There will be only two calls of Fib(1) and Fib(0) altogether (at level 3). The second Fib(1) (at level 2) will not be called at all, as its result is already with us. Similarly, Fib(2) (at level 1) will not be called at all, as it has already been computed at level 2. In this way we avoid re-computations on two occasions.

This is the strength of DP. The strength seems trivial with this trivial example, but as the problem size grows, it becomes a prominent advantage. Just think of Fib(100).
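
A minimal C sketch (illustrative, not part of the lesson) of this memoized Fibonacci:

#include <stdio.h>

#define MAXN 100

static long long memo[MAXN];           /* memo[n] == 0 means "not computed yet" */

long long fib(int n) {
    if (n <= 1) return n;              /* trivial cases: Fib(0) = 0, Fib(1) = 1 */
    if (memo[n] != 0) return memo[n];  /* reappearing sub-problem: ready-made answer */
    memo[n] = fib(n - 1) + fib(n - 2); /* solve once, save it in the table */
    return memo[n];
}

int main(void) {
    printf("Fib(50) = %lld\n", fib(50));   /* O(n) additions instead of exponentially many calls */
    return 0;
}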

Is Dynamic Programming a process to find solution quicker?

Yes it is, but the solution we are talking about is an optimal solution, not necessarily an exact one. The principle of optimality applies if the optimal solution to a problem always contains optimal solutions to all sub-problems. Take the following example:

Let us consider the problem of making N Rupees with the fewest number of Rupee coins.

Either there is a coin of value N Rupees (exact solution), or the set of coins making up an optimal solution for N Rupees can be divided into two non-empty subsets, of n1 Rupees and n2 Rupees (optimal solution).

If either the n1 Rupees or the n2 Rupees could be made with a smaller number of coins, then clearly N could be made with fewer coins, hence the solution was not optimal.

Tell me more on Principle of Optimality:

The principle of optimality holds if

Every optimal solution to a problem contains optimal solutions to all subproblems

The principle of optimality does not say

If you have optimal solutions to all subproblems then you can combine them to get an optimal solution

Example: If we have infinite numbers of coins of 1 cent, 5 cents and 10 cents only, then the optimal solution to make 7 cents is 5 cents + 1 cent + 1 cent, and the optimal solution to make 6 cents is 5 cents + 1 cent, but the optimal solution to make 13 cents is NOT 5 cents + 1 cent + 1 cent + 5 cents + 1 cent.

But there is a way of dividing up 13 cents into subsets with optimal solutions (say, 11 cents + 2 cents) that will give an optimal solution for 13 cents. Hence, the principle of optimality holds for this problem.

Let us see one example where Principle of Optimality does not hold.

Consider a graph on the vertices A, B, C and D (the lesson refers to a figure that is not reproduced here).

The longest simple path (a path not containing a cycle) from A to D is A->B->C->D. However, the sub-path A->B is not the longest simple path from A to B (A->C->B is longer).

The principle of optimality is not satisfied for this problem. Hence, the longest simple path problem cannot be solved by a dynamic programming approach.

Can you give me a thorough example of DP? 

Why not! The following is an example of Dynamic Programming at work. It presents a comparative analysis of how DP works versus how Divide & Conquer works.

Let us consider the Coin Counting Problem. It is to find the minimum number of coins to make any amount, given a set of coins.  

If we are given a set of coins of 1 unit, 10 units and 25 units, then to make 31 units, what is the minimum number of coins required?

Let us check whether the greedy method will work or not. The Greedy Method says: just choose the largest coin that does not overshoot the desired amount. So at the first step we take one coin of 25 units, and then, in each of the next six steps, one 1-unit coin. So ultimately the Greedy Method's solution is 31 = 25 + 1 + 1 + 1 + 1 + 1 + 1 (7 coins needed in total).

But evidently there is a better solution, like 31 = 10 + 10 + 10 + 1 (only 4 coins are needed).

Hence the Greedy Method does not always work.

Now let us check whether any better algorithm exists or not! What about the following?

To make K units:

If there is a K unit coin, then that one coin is the minimum

Otherwise, for each value i < K,

Find the minimum number of coins needed to make i units

Find the minimum number of coins needed to make K - i units

Choose the value of i that minimizes this sum

Yes, this will work. It actually follows the Divide & Conquer principle, but there are two problems with it: the solution is heavily recursive, and it requires an exponential amount of work.

 Now, if we fix the given set of coins as 1, 5, 10, 21, 25 and if the desired amount is 63, then the previous solution will require solving 62 recursive sub problems. 

What if, instead, we choose the best solution among the following?

  •   One 1 unit coin plus the best solution for 62 units
  •   One 5 units coin plus the best solution for 58 units
  •   One 10 units coin plus the best solution for 53 units
  •   One 21 units coin plus the best solution for 42 units
  •   One 25 units coin plus the best solution for 38 units

In this case, we need to solve only 5 recursive sub problems. So obviously, this second solution is better than the first solution. But still, this second solution is also very expensive. 

Now let us check how DP can solve it!

To keep it short, let us say the desired value is 13 units and the given set of coins is 1 unit, 3 units and 4 units.

DP solves first for one unit, then two units, then three units, etc., up to the desired amount, and saves each answer in a table (memoization). Hence it goes like this:

For each new amount N, compute all the possible pairs of previous answers which sum to N

For example, to find the solution for 13 units,

First, solve for all of 1 unit, 2 units, 3 units, ..., 12 units

Next, choose the best solution among:

  • Solution for 1 unit   +   solution for 12 units
  • Solution for 2 units   +   solution for 11 units
  • Solution for 3 units   +   solution for 10 units
  • Solution for 4 units   +   solution for 9 units
  • Solution for 5 units   +   solution for 8 units
  • Solution for 6 units   +   solution for 7 units

This will run like following:

There’s only one way to make 1 unit (one coin)

To make 2 units, try 1 unit +1 unit (one coin + one coin = 2 coins)

To make 3 units, just use the 3 units coin (one coin)

To make 4 units, just use the 4 units coin (one coin)

To make 5 units, try

  • 1 unit + 4 units (1 coin + 1 coin = 2 coins)
  •  2 units + 3 units (2 coins + 1 coin = 3 coins)
  •  The first solution is better, so best solution is 2 coins

To make 6 units, try

  •  1 unit + 5 units (1 coin + 2 coins = 3 coins)
  •  2 units + 4 units (2 coins + 1 coin = 3 coins)
  •  3 units + 3 units (1 coin + 1 coin = 2 coins) – best solution

Etc.
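
A minimal C sketch (illustrative) of this bottom-up table filling, using the "one coin plus the best solution for the remainder" form of the recurrence with the lesson's coins of 1, 3 and 4 units:

#include <stdio.h>
#include <limits.h>

/* Fewest coins needed to make 'amount' from the given coin values, or INT_MAX if impossible. */
int min_coins(const int *coins, int ncoins, int amount) {
    int best[amount + 1];                        /* best[i] = fewest coins that make i units */
    best[0] = 0;
    for (int i = 1; i <= amount; i++) {
        best[i] = INT_MAX;
        for (int c = 0; c < ncoins; c++)
            if (coins[c] <= i && best[i - coins[c]] != INT_MAX
                              && best[i - coins[c]] + 1 < best[i])
                best[i] = best[i - coins[c]] + 1; /* one coin of value coins[c] + best remainder */
    }
    return best[amount];
}

int main(void) {
    int coins[] = {1, 3, 4};
    printf("Fewest coins for 13 units: %d\n", min_coins(coins, 3, 13));  /* prints 4 (4+4+4+1) */
    return 0;
}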

OK, I got it, but how can you say that computationally this is the best?

The first algorithm is recursive, with a branching factor of up to 62. Possibly the average branching factor is somewhere around half of that (31). So the algorithm takes exponential time, with a large base.

The second algorithm is much better: it has a branching factor of 5. Still, this is exponential time, with base 5.

The dynamic programming algorithm is O(N*K), where N is the desired amount and K is the number of different kinds of coins. 

So I don’t hesitate to say that, computationally, the DP algorithm works best among these three.

What is this Matrix Chain Multiplication Problem?

Suppose we have a sequence or chain A1, A2, …, An of n matrices to be multiplied. That is, we want to compute the product A1A2…An. Now there are many possible ways (parenthesizations) to compute the product.

Let us consider the chain A1, A2, A3, A4 of 4 matrices. To compute the product A1A2A3A4, there are 5 possible ways, as described below:

 (A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4)

Each of these options may lead to a different number of scalar multiplications, and we have to select the best one (the option resulting in the fewest scalar multiplications).

Hence the problem statement looks something like: “Parenthesize the product A1A2…An such that the total number of scalar multiplications is minimized.”

Please remember that the objective of Matrix Chain Multiplication is not to carry out the multiplication physically; rather, the objective is to fix the ordering of the multiplications so that the number of scalar multiplications is minimized.

 Give me one real example:

OK. But before I show you one real example, let us revisit the algorithm which multiplies two matrices. It goes as follows:

 Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)

Result: Matrix Cp×r resulting from the product A·B

MATRIX-MULTIPLY(Ap×q , Bq×r)

  1. for i ← 1 to p
  2. for j ← 1 to r
  3. C[i, j] ← 0
  4. for k ← 1 to q
  5. C[i, j] ← C[i, j] + A[i, k] · B[k, j]
  6. return C

In the above algorithm, the scalar multiplication in line 5 dominates the running time: the three nested loops execute line 5 p·q·r times, so multiplying Ap×q by Bq×r costs p·q·r scalar multiplications.
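
A short worked example (the dimensions here are chosen for illustration and are not from the lesson): let A1 be 10×100, A2 be 100×5 and A3 be 5×50. Computing (A1A2)A3 costs 10·100·5 + 10·5·50 = 5000 + 2500 = 7500 scalar multiplications, whereas A1(A2A3) costs 100·5·50 + 10·100·50 = 25000 + 50000 = 75000. The same product, parenthesized differently, is ten times more expensive; choosing the cheap ordering is exactly what the Matrix Chain Multiplication problem asks for.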



Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering +3 BTech Computer Science Engineering BCA Tuition BSc Computer Science

Do You Give Enough Importance In Writing Main () In C Language?

Shiladitya Munshi



I am sure you guys are quite familiar with C programming, especially when it comes to writing the first line of your main function. In my class, I have seen many of my students writing the main function in many different forms.

Some students write main(), some write main(void), some write void main(void), and some write int main(void). Now the question is: which one is right?

All of them will generally compile. You usually write your C programs with the Turbo C editor, or some other environment, and generally they do not complain whichever form you write. But still, there is the matter of theoretical correctness. Trivial programming exercises may not be affected by the way you declare the main function, but in critical cases your decision may make a difference. This is why you should know which one is theoretically correct.

As we know, main is a function which gets called at the beginning of execution of any program. So, like other functions, main must have a return type and it must declare what arguments it expects. If you are not calling your program (and hence your main) from the command prompt with arguments, your main function should not bother with any arguments, so you should declare main with a void argument, as in main(void).

Now you may raise the point that main() itself suggests that no arguments are passed, so what is wrong with it? The thing is, in your current compiler the default may be to assume no arguments, but the same may not be true with other compilers or platforms, so you may end up with portability issues! Keep in mind: A Good Programmer Never Relies On Defaults. So please don’t rely on defaults any more; state clearly that your main function is not expecting any arguments by writing main(void).

You already know that a function can never work unless it is called by some other program component. This is true for main also: the function main must be called by someone! Who is that? Who calls the main function? It is the operating system that calls main.

Now, your operating system may decide to do something else depending on the success or failure of your program. It may be that if your program runs successfully, the operating system will call a program P1, and if it fails, the operating system will call another program P2. So what if your operating system has no idea whether your program actually ran successfully or not? There lies the importance of the return type of main. Your main function should return something to the operating system to indicate whether it ran successfully. The rule is that if the operating system gets the integer 0 back, it takes it that main ran successfully, and any non-zero value indicates the opposite. So your main function should return an integer; hence you should write int main(void). And don’t forget to return 0 at the end of your main function, so that if all of your code runs successfully, main returns 0 to the operating system, indicating success.

At this point you may argue that your program runs quite well even if you do not provide a return type to main and just write main(void). How is that possible?

Once again, right now you are writing mostly trivial academic code. The situation is not so critical that your operating system will decide anything based on the success or failure of your main(). But in the near future you will write such critical programs, so get prepared from now.

Cool. So, to conclude, int main(void) is the correct form to write, and I encourage you to write it that way, along with a return 0; as the last line.
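
A minimal sketch of the recommended form (the program body is just an illustration):

#include <stdio.h>

int main(void)              /* returns an int to the operating system, takes no arguments */
{
    printf("Hello\n");
    return 0;               /* 0 tells the operating system that the program succeeded */
}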


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering +3 BTech Computer Science Engineering BCA Tuition BSc Computer Science

Pointers And References

Shiladitya Munshi



Are reference and pointers same?

No.

I have seen this confusion cropping up among students from the very first day. So better to clear it up at the very beginning.

Pointers and references both hold the address of another variable. Up to this point they look similar, but their syntax and further consequences are totally different. Just consider the following pieces of code:

Code 1:

int i;
int *p = &i;

Code 2:

int i;
int &r = i;

Here in Code 1 we have declared and defined an integer pointer p which points to the variable i; that is, p now holds the address of i.

In Code 2, we have declared and defined an integer reference r which refers to the variable i; that is, r now holds the address of i, exactly as p does.

So where is the difference?

The first difference can be found just by looking at the code. Their syntaxes!

Secondly, the difference comes up when the two are used to assign a value (say, 10) to i.

If you are using a pointer, you do it as *p = 10; but if you are using a reference, you do it as r = 10. Just be careful to understand that when you are using pointers, the address must be dereferenced using *, whereas when you are using references, the address is dereferenced without using any operator at all.

This notion has major consequences. As the address of the variable is dereferenced with the * operator when using a pointer, you are free to do arithmetic on the pointer itself: you can increment p to point to the next address just by doing p++. This is not possible with references. So a pointer can point to many different elements during its lifetime, whereas a reference can refer to only one element during its lifetime.

Does C language support references?

No. The concept of a reference was added in C++, not in C. So if you compile the following code as C, the compiler will object then and there.

#include <stdio.h>
#include <conio.h>

int main(void)
{
    int i;
    int &r = i;
    r = 10;
    printf("\n Value of i assigned with reference r = %d", i);
    getch();
    return 0;
}

 But if you are using any C++ compiler, this code will work fine as expected.

 If there is no concept of reference in C language, then how come there exists C function call by reference?

Strictly speaking, there is no concept of function call by reference in the C language; C supports only function call by value. Though some books (I will not name any) write that C supports call by reference, or that the effect of call by reference can be simulated through pointers, I will say strongly that C neither directly supports function call by reference nor provides any other mechanism to simulate the same effect.

I know you are on your toes to argue: what about calling a C function with the address of a variable and receiving it with a pointer? The change made to that variable within the function has a global effect. How can this not be treated as an example of function call by reference?

You would probably argue with code like the following:

#include <stdio.h>
#include <conio.h>

void foo(int *p)
{
    *p = 5;
    printf("\n Inside foo() the value of the variable: %d", *p);
}

int main(void)
{
    int i = 10;
    printf("\n before calling foo() the value of the variable: %d", i);
    foo(&i);
    printf("\n after calling foo() the value of the variable: %d", i);
    getch();
    return 0;
}

Your code will show that the value of the variable is 10 before calling foo(), and 5 both inside foo() and after the call.

Your points are well taken. But what you are showing is not at all a call by reference; it is just a function call by value! You are essentially copying the value of the address of your variable i and calling the function foo with that copy. It just happens that, in this case, the value being passed contains the address of another variable. Within the function, you accept this value with a pointer and change the value of the content addressed by that pointer. So it is nothing but a function call by value.

Please note that to change the value of the content addressed by a pointer, you have to use *; in no way can this be thought of as a reference.

Now let me give you one example of a true function call by reference:

#include <stdio.h>
#include <conio.h>

void foo(int &r1)
{
    r1 = 5;
    printf("\n Inside foo() the value of the variable: %d", r1);
}

int main(void)
{
    int i = 10;
    int &r = i;
    printf("\n before calling foo() the value of the variable: %d", i);
    foo(r);
    printf("\n after calling foo() the value of the variable: %d", i);
    getch();
    return 0;
}

 Will this run with your C compiler? No.

Note: I have used Dev-C++ as the coding platform.

 


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering +3 BTech Computer Science Engineering BCA Tuition BSc Computer Science

Some Interview Questions And Answers For Fresher Level On Pointers

Shiladitya Munshi



What is a void pointer?

A void pointer is a special type of pointer which can reference, or point to, any data type. This is why it is also called a Generic Pointer.

As a void pointer can point to any data type, it is neither possible to dereference a void pointer directly nor possible to do any arithmetic operations on it. Hence an explicit type cast is an absolute must when using a void pointer.
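
A minimal C sketch (illustrative) of the explicit cast that a void pointer requires:

#include <stdio.h>

int main(void) {
    int x = 42;
    void *vp = &x;              /* a void pointer may point to any data type */

    /* printf("%d\n", *vp); */  /* error: a void pointer cannot be dereferenced directly */
    printf("%d\n", *(int *)vp); /* cast to int * first, then dereference */
    return 0;
}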

What is a null pointer?

A null pointer is also a special type of pointer: it points definitively to nothing.

As per the official C language description, every valid pointer type has a special value called the “null pointer”; this value is always distinguishable from all other pointer values, and it is guaranteed to compare unequal to a pointer to any object or function.

Hold it for a second! So what you are saying is there are different versions of null pointers for every pointer type. Is it so?

No. We should not call them “different versions”; rather, their internal representations may differ. But as programmers we need not worry about that, because the compiler takes care of and tracks the internal values (possibly differently on different compilers).

So why don’t you say that a null pointer is a pointer which is not initialized yet?

I cannot go with that idea, because an uninitialized pointer may point to anything. There might be an uninitialized char pointer, an uninitialized int pointer and so on. But a null pointer is definitely something that points to nothing.

Ok. Now tell me what is a dangling pointer?

Dangling pointers are pointers which point to memory locations that have already been deallocated or have gone out of scope. Suppose I write a piece of code like this:

int *p;
p = (int *)malloc(n * sizeof(int));
----
----
free(p);

In this piece of code, after allocating some memory to p, we deallocate the memory that p was pointing to. From now on, p acts as a dangling pointer.

How to rectify this?

In this case, after deallocation, that is, after calling free(), we should set the pointer p to null: p = NULL;

Could you please exhibit any other source of dangling pointer problems?

Well, there might be other sources also, like

Suppose I have declared int *p, and within an inner scope I have defined another integer x as 10, and within that scope I make the pointer p hold the address of x. Outside that scope, p will act as a dangling pointer. I can write the code as:

void foo(void)
{
    int *p;
    {
        int x = 10;
        p = &x;
    }
    /* here p will act as a dangling pointer */
}

Another case occurs when a function returns the address of one of its local variables and a pointer at the call site receives it. The receiving pointer can then run into the dangling-pointer problem: just after the return, the variable has passed out of scope, and the stack memory where it was stored may be reused. The failure is not always immediately visible, but it can happen.

Fine, so how to deal with this second case?

To deal with this, and to be on the safer side, we must make sure the variable outlives the function call; declaring it as static is one option.
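
A minimal C sketch (illustrative) of this second case and of the static fix suggested above:

#include <stdio.h>

int *bad(void) {
    int x = 10;
    return &x;            /* compilers warn here: the caller receives a dangling pointer */
}

int *good(void) {
    static int x = 10;    /* static: lifetime is the whole program run, so the address stays valid */
    return &x;
}

int main(void) {
    int *p = good();
    printf("%d\n", *p);   /* safe: prints 10 */
    return 0;
}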



